
Friday, November 7, 2014

They're all IQ tests, you just didn't know it

Here's one to file under the category of 'things that may have been obvious to most well-adjusted people, but were at least a little bit surprising to me'.

Many people do not react particularly positively when you tell them what their IQ is, particularly when this information is unsolicited.

Not in the sense of 'I think you're an idiot', or 'you seem very clever'. Broad statements about intelligence, even uncomplimentary ones, are fairly easy to laugh off. If you think someone's a fool, that's just, like, your opinion, man.

What's harder to laugh off is when you put an actual number to their IQ.

Having done this a couple of times now, the first thing you realise is that people are usually surprised you can do it at all. IQ is viewed as something mysterious that only trained professionals can ascertain, requiring an arcane set of tasks like spotting patterns in specially designed pictures.

The reality is far simpler. Here's the basic cookbook:

1. Take a person's score on any sufficiently cognitively loaded task = X

2. Convert their score to a normalised score in the population, i.e. work out how many standard deviations above or below the mean they are: subtract off the mean score on the test, and divide by the standard deviation of scores on the test. Y = [ X - E(X) ] / σ(X)

3. Convert the standard normal to an IQ score by multiplying the standard normal by 15 and adding 100:
IQ = 100 + 15*Y

That's it.
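If you prefer it in code, here's a minimal sketch of the recipe in Python (the test mean and standard deviation are whatever applies to the task at hand; the numbers in the example are placeholders):

def iq_from_score(score, test_mean, test_sd):
    # Step 2: normalise to a z-score
    y = (score - test_mean) / test_sd
    # Step 3: rescale so the mean is 100 and the standard deviation is 15
    return 100 + 15 * y

# e.g. a score one standard deviation above the test mean:
print(iq_from_score(600, 500, 100))  # 115.0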

Because that's all IQ really is - a normal distribution of intelligence with a mean of 100 and a standard deviation of 15.

Okay, but how do you find out a person's score on a large-sample, sufficiently cognitively-loaded task?

Simple - ask them 'what did you get on the SAT?'. Most people will pretty happily tell you this, too.

The SAT pretty much fits all the criteria. It's cognitively demanding, participants were definitely trying their best, and we have tons of data on it. Distributional information is easy to come by - here, for instance. 

You can take their score and convert it to a standard normal as above - for the composite score, the mean is 1497 and the standard deviation is 322. Alternatively you can use the percentile information they give you in the link above and convert that to a standard normal using the NORM.INV function in excel. At least for the people I looked at, the answers only differed by a few IQ points anyway. On the one hand, this takes into account the possibly fat-tailed nature of the distribution, which is good. On the other hand, you're only getting percentiles rounded to a whole number of percent, which is lame. So it's probably a wash.
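If spreadsheets aren't your thing, the same two routes look like this in Python (a sketch using scipy; the 1497 mean and 322 standard deviation are the composite figures quoted above, and the example score is made up):

from scipy.stats import norm

def iq_from_sat(composite, mean=1497.0, sd=322.0):
    # Route 1: z-score from the quoted mean and standard deviation
    return 100 + 15 * (composite - mean) / sd

def iq_from_percentile(pctile):
    # Route 2: invert the standard normal from the published
    # percentile (the NORM.INV approach)
    return 100 + 15 * norm.ppf(pctile)

print(iq_from_sat(2100))         # ~128
print(iq_from_percentile(0.96))  # ~126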

And from there, you know someone's IQ.

Not only that, but this procedure can be used to answer a number of the classic objections to this kind of thing.

Q1: But I didn't study for it! If I studied, I'm sure I'd have done way better.

A1: Good point. Fortunately, we can estimate how big this effect might be. Researchers have formed estimates of how much test preparation boosts SAT scores after controlling for selection effects. For instance:
When researchers have estimated the effect of commercial test preparation programs on the SAT while taking the above factors into account, the effect of commercial test preparation has appeared relatively small. A comprehensive 1999 study by Don Powers and Don Rock published in the Journal of Educational Measurement estimated a coaching effect on the math section somewhere between 13 and 18 points, and an effect on the verbal section between 6 and 12 points. Powers and Rock concluded that the combined effect of coaching on the SAT I is between 21 and 34 points. Similarly, extensive metanalyses conducted by Betsy Jane Becker in 1990 and by Nan Laird in 1983 found that the typical effect of commercial preparatory courses on the SAT was in the range of 9-25 points on the verbal section, and 15-25 points on the math section. 
So you can optimistically add 50 points onto your score and recalculate. I suspect it will make less difference than you think. If you want a back of the envelope calculation, 50 points is 50/322 = 0.16 standard deviations, or 2.3 IQ points.

Q2: Not everyone in the population takes the SAT, as it's mainly college-bound students, who are considerably smarter than the rest of the population. Your calculations don't take this into account, because they're percentile ranks of SAT takers, not the general population. Surely this fact alone makes me much smarter, right?

A2: Well, sort of. If you're smart enough to think of this objection, paradoxically it probably doesn't make much difference in your case - it has more of an effect for people at the lower IQ end of the scale. The bigger point though, is that this bias is fairly easy to roughly quantify. According to the BLS, 65.9% of high school graduates went on to college. To make things simple, let's add a few assumptions (feel free to complicate them later, I doubt it will change things very much). First, let's assume that everyone who went on to college took the SAT. Second, let's assume that there's a rank ordering of intelligence between college and non-college - the non-SAT cohort is assumed to be uniformly dumber than the SAT cohort, so the dumbest SAT test taker is one place ahead of the smartest non-SAT taker.

So let's say that I'm in the 95th percentile of the SAT distribution. We can use the above fact to work out my percentile in the total population, given I'm assumed to have beaten 100% of the non-SAT population and 95% of the SAT population:
Pctile (true) = 0.341 + 0.95*0.659 = 0.967

And from there, we convert to standard normals and IQ. In this example, the 95th percentile is 1.645 standard deviations above the mean, giving an IQ of 125. The 96.7th percentile is 1.839 standard deviations above the mean, or an IQ of 128. A surprisingly small effect, no?

For someone who scored in the 40th percentile of the SAT, however, it moves them from 96 to 104. So still not huge. But the further you go down, the bigger it becomes. Effectively you're taking a weighted average of 100% and whatever your current percentile is, and that makes less difference when your current one is already close to 100.
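For anyone who wants to run their own numbers, here's a sketch of the correction under the two simplifying assumptions above (the 65.9% college share is the BLS figure quoted earlier):

from scipy.stats import norm

COLLEGE_SHARE = 0.659  # BLS figure quoted above

def selection_corrected_iq(sat_pctile, college_share=COLLEGE_SHARE):
    # You're assumed to beat all of the non-SAT cohort plus
    # sat_pctile of the SAT takers.
    true_pctile = (1 - college_share) + sat_pctile * college_share
    return 100 + 15 * norm.ppf(true_pctile)

print(selection_corrected_iq(0.95))  # ~128, versus ~125 uncorrected
print(selection_corrected_iq(0.40))  # ~104, versus ~96 uncorrected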

Of course, the reality is that if someone is offering these objections after you've told them their IQ, chances are they're not really interested in finding out an unbiased estimate of their intelligence, they just want to feel smarter than the number you told them. Perhaps it's better to not offer the ripostes I describe.

Scratch that, perhaps it's better to not offer any unsolicited IQ estimates at all. 

Scratch that, it's almost assuredly better to not offer them. 

But it can be fun if you've judged your audience well and you, like me, occasionally enjoy poking people you know well, particularly if you're confident the person is smart enough that the number won't sound too insulting.

Of course, readers of this august periodical will be both a) entirely comfortable with seeing reality as it is, and thus nearly all pleased to get an unbiased estimate of their IQ, and b) whip-smart anyway, so the news could only be good regardless.

If that's not the case... well, let's just say that we can paraphrase Eliezer Yudkowsky's advice to 'shut up and multiply': in this context, it becomes 'multiply, but shut up about it'.

The strange thing is that even though people clearly are uncomfortable having their IQ thrown around, they're quite willing to tell you their SAT score, because everybody knows it's just a meaningless test that doesn't measure anything. Until you point out what you can measure with it. 

I strongly suspect that if SAT scores were given as IQ points, people would demand that the whole thing be scrapped. On the other hand, the people liable to get furious were probably not that intelligent anyway, adding further weight to the idea that there might be something to all this after all.

Sunday, September 14, 2014

Of Behavioural Red Flags and Unfunded Campaign Promises

One of the key meta-points of the rationality crowd is that one needs to explicitly think about problem-solving, because one's intuitions will frequently be wrong. In general, sophistication about biases is crucially important - awareness of the possibility that one might be wrong, and being able to spot when this might be occurring. If you don't have that, you'll keep making the same mistakes over and over, because you won't consider that you might have screwed up last time. Instead, the world will just seem confusing or unfair, as unexpected (to you) things keep happening over and over.

For me, there are a number of red flags I have that indicate that I might be screwing something up. They're not ironclad indications of mistakes, but they're nearly always cause to consider problems more carefully.

The first red flag is time-inconsistent preferences (see here and here). When you find yourself repeatedly switching back and forth between preferring X and preferring Not X, this is usually a sign that you're screwing something up. If you go back and forth once or twice, maybe you can write that off as learning  due to new information. But if you keep changing your mind over and over, that's harder to explain. At least in my case, it's typically been due to some form of the hot-cold empathy gap - you make different decisions in cold, rational, calculating states versus hot, emotionally charged states, but in both types of state you fail to forecast how your views will predictably change when you revert back to the previous state. I struggle to think of examples of when repeatedly changing your mind back and forth over something is not in fact an indication of faulty reasoning of some form.

The second red flag is wishing for less information. This isn't always irrational - if you've only got one week to live, it might be entirely sensible to prefer to not find out that your husband or wife cheated on you 40 years ago, and just enjoy the last week in peace. (People tempted to make confessions to those on their deathbed might bear in mind that this is probably actually a selfish act, compounding what was likely an earlier selfish act). But for the most part, wishing to not find something out seems suspicious. Burying one's head in the sand is rarely the best strategy for anything, and the desire to do so seems to be connected to a form of cognitive dissonance - the ego wanting to protect the self-image, rather than admit to the possibility of a mistake. Better advice is to embrace Eugene Gendlin:
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.

The third red flag is persistent deviations between stated and revealed preference (see, for instance, here and here). This is what happens when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X. The stated preference for liking X is belied by the revealed preference to not actually buy it. Being in the budget set is key - if one has a stated preference for sleeping with Scarlett Johansson but is not doing so, this is unlikely to be violating any axioms of expected utility theory, whatever else it may reveal.

As I've discussed before, for a long time I had a persistent conflict of exactly this kind when it came to learning Spanish. I kept saying I wanted to learn it, and would try half-heartedly with teach-yourself-Spanish MP3s, but would pretty soon drift off and stop doing it.

This inconsistency can be resolved one of two ways. Firstly, the stated preference could be correct, and I have a self-control problem: Spanish would actually be fun to learn, but due to laziness and procrastination I kept putting it off for more instantly gratifying things. Secondly, the revealed preference could be correct: learning Spanish isn't actually fun for me, which is why I don't persist in it, and the stated preference just means that I like the idea of learning Spanish, probably out of misguided romantic notions of what it will comprise.

Having tried and failed at least twice (see: time-inconsistent preferences), I decided that the second one was true - I actually didn't want to learn Spanish. Of course, time-inconsistency being what it is, every few years it seems like a good idea to do it, and I have to remind myself of why I gave up last time.

Being in the middle of one such bout of mental backsliding recently, I was pondering why the idea of learning another language kept holding appeal to me, even after thinking about the problem as long as I had. I think it comes from the subtle aspect of what revealed preference is, this time repeated with emphasis on the appropriate section:
when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X
Nearly everything comes down to actual willingness to pay. Sure, it would be great to know Spanish. Does that mean it is great to learn Spanish? Probably not. One thinks only of the final end state of knowledge, not of the process of sitting in the car trying to think of the appropriate Spanish phrase for whatever the nice-sounding American man is saying, and worrying if the mental distraction is increasing one's risk of accidents.

Of course, it's in the nature of human beings to resist acknowledging opportunity cost. There's got to be a way to make it work!

And it occurred to me that straight expressions of a desire to do something have a lot in common with unfunded campaign promises. I'll learn the piano! I'll start a blog! I'll read more Russian literature!

These things all take time. If your life has lots of idle hours in it, such as if you've recently been laid off, then great, you can take up new hobbies with gay abandon.

But if your week is more or less filled with stuff already, saying you want to start some new ongoing task is pointless and unwise unless you're willing to specify what you're going to give up to make it happen. There are only so many hours in the week. If you want to spend four of them learning piano, which current activities that you enjoy are you willing to forego? Two dinners with friends? Spending Saturday morning with your kid? Half a week's worth of watching TV on the couch with your boyfriend? What?

If you don't specify exactly what you're willing to give up, you're in the exact same position as politicians promising grand new spending schemes without specifying how they're going to pay for them. And this goes doubly so for ongoing commitments. Starting to listen to the first teach-yourself-Spanish MP3, without figuring out how you're going to make time for the remaining 89 in the series, is just the same as deciding you want to build a high speed rail from LA to San Francisco, and constructing a 144 mile section between Madera and Bakersfield without figuring out how, or if, you're going to be able to build the whole thing.

And like those politicians you scorn, you'll find yourself tempted to offer the same two siren-song mental justifications that get trotted out for irresponsible programs everywhere.

The first of the sirens is that you'll pay for the program by eliminating waste and duplication elsewhere. Doubt not that your life, much like the wretched DMV, is full of plenty of waste and duplication. But doubt it not as well that this waste and duplication will prove considerably harder to get rid of than you might have bargained for. If your plan for learning Spanish is 'I'll just stop wasting any time on the internet each day'... yeah, you're not going to get very far. Your system 2 desire to learn piano is like Arnie, and your desire to click on that blog is like the California Public Sector Unions - I know who my money's on. The amount of waste you can get rid of is probably not enough to fund very much activity at all. Just like in government.

The second siren is the desire to just run at a budget deficit. The area of deficit that almost always comes up is sleep. I'll just get up an hour earlier and practice the piano! Great - so are you planning to go to bed an hour earlier too? If so, we're back at square one, because something in the night's activities has to be cut. If not, do you really think that your glorious plan to switch from 8 hours a night to 7 hours a night, in perpetuity, is likely to prove feasible (absent long-term chemical assistance) or enjoyable (even with such assistance)? Every time I've tried, the answer has been a resounding 'no'. I say 'every time' advisedly, as this awful proposal manages to seem appealing again and again. You can in fact live on less sleep for extended periods - just ask parents with newborn children. It's also incredibly unpleasant to do so - just ask parents with newborn children. They'll do it because millions of years of evolutionary forces have caused them to feel such overwhelming attachment to their children that the sacrifice is worth it. And you propose to repeat the feat to learn the piano? That may seem like a great idea when you start out for the first night, fresh from a month of good sleeping. It seems like less of a good idea the next morning when your alarm goes off an hour earlier than usual. And I can assure you it almost certainly will not seem like a good idea after a month of being underslept, should you in fact get that far. Iterate forward, and don't start.

The real lesson is to only undertake things that you're actually willing to pay for. If you don't know what you're willing to give up, you don't actually know if you demand something, as opposed to merely want it. Confuse the two at your peril.

Wednesday, May 8, 2013

Hate Generalisations? You Probably Just Hate Statistics

One of the most oft-repeated nonsense claims by a certain type of low-wattage intellectual lefty is that one 'shouldn't generalise'. (For reasons that are worthy of a separate post, this seems to me to be reasonably correlated with people who also proudly announce that they 'don't judge').

Apparently, one of the Worst Things In The World you can do is to notice that information about the generality of a distribution may be useful in predicting where a specific point in the distribution will lie.

For those people that don't like to 'generalise', I wonder what, if any, statistical measures they actually find interesting or legitimate.

What is an average, if not a statement that lets one generalise from a large number of data points to a concise summary property about all of the points combined? Or a standard deviation? Or a median?

The anti-generalisers tend to apply their argument ('assertion' is probably a better description) in two related ways, varying slightly in stupidity:

a) One should not summarise a range of data points into a general trend (e.g. 'On average, [Group X] commits murders at a higher rate than [Group Y]').

b) One should not use a general trend to form probabilistic inferences about a particular data point (e.g. 'Knowing statement a), if I also know that person A is in Group X, and person B is in Group Y, I should infer that person A has a higher probability of committing a murder than person B').

Version a) says you shouldn't notice trends in the world. Version b) says you shouldn't form inferences based on the trends you observe.

Both are bad in our hypothetical interlocutor's worldview, but I think version b) is what particularly drives them batty.

But unless you just hate Bayesian updating, the two statements flow from each other. b) is the logical consequence of a).

Now, this isn't a defence of every statement about the world that people make which cites claims a) and b). To a Bayesian, you have to update correctly.

You can have priors that are too wide, or too narrow.

You can make the absurd mistake of assuming that P(R|S) = P(S|R).

You can update too fast or too slowly based on new information.
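To make the second of those mistakes concrete, here's a toy Bayes'-rule check with made-up numbers: even if most murderers were in group X, it would not follow that most members of group X are murderers.

p_group = 0.5          # P(X): share of the population in group X
p_murderer = 0.0001    # P(M): base rate of murderers
p_group_given_m = 0.9  # P(X|M): share of murderers who are in group X

# Bayes' rule: P(M|X) = P(X|M) * P(M) / P(X)
p_m_given_group = p_group_given_m * p_murderer / p_group
print(p_m_given_group)  # ~0.00018 - nothing remotely like 0.9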

And none of this has even begun to specify how you should treat the people you meet in life in response to such information.

None of my earlier statements are a defence of any of this. The first three are all incorrect applications of statistics. The last one is a question about manners, fairness, and how we should act towards our fellow man.

But there's nothing wrong with the statistical updating.

If your problem is with 'generalising', your problem is just some combination of 'the world we live in' and 'rationality'.

Suppose the example statements in a) and b) made you slightly uncomfortable. Let me ask you the following:

What groups X and Y did you have in mind when I spoke about the hypothetical murder trends example? Notice I didn't specify anything.

One possibility that you may be thinking I had in mind was that X = 'Blacks' and Y = 'Whites'. People don't tend to like talking about that one.

In actual fact, what I had in mind was X = 'Men' and Y = 'Women'. This one is not only uncontroversial, but it almost goes without saying.

As it turns out, both are true in the data.

Do inferences based on these two both make you equally uncomfortable? Somehow I doubt it.

And if they don't, you should be honest enough to admit that your problem is not actually with statistical updating, or 'generalisations'. It's just trying to launder some sociological or political concern through the action of browbeating the correct application of statistics.

So stop patronisingly sneering that something is a generalisation, and using that as an implied criticism of an argument or moral position. Otherwise zombie Pierre-Simon Laplace is going to come and beat yo' @$$ with a slide rule.

Tuesday, June 12, 2012

Fatties Are Optimising

Let me repeat a frequent refrain that I hear from smug skinny people from time to time:
Hur hur hur. Look at that fat guy at McDonalds, getting a super-sized Big Mac meal with a diet coke. As if the diet coke makes up for the huge number of calories he's consuming! What a moron!
Reader, I am here to tell you that at least along one metric, the fattie is behaving in an entirely rational way, and the skinny guy is in fact the moron for not understanding optimisation.

When I say 'rational', I don't mean that they are doing the optimal thing in some cosmic sense. Rather, I mean it in the classical applied microeconomics sense that they are maximising something.

So what exactly might they be maximising that would be consistent with their behaviour?

Consider that they get utility from eating equal to

U = Taste*Quantity

It's better to eat tastier stuff, and it's better to eat more of it. Not too controversial, right?

In addition, suppose they can't eat an unlimited amount - let's grant them a binding calorie budget for each meal. The exact budget doesn't actually matter for the analysis.

So the fatties want to eat as much tasty food as possible given a maximum total calorie allowance. How do they choose the amounts of foodstuffs to get to this point?

Well, if you work through the fairly simple constrained optimisation, the relevant metric for comparing across food items is the taste per unit calorie. This is a measure of how good the 'value' of each food is, if you think of calories like money. In other words:

"Value(Food X)" = Taste (Food X) /  Calories (Food X)

In equilibrium, you will want to allocate more consumption towards foods that deliver higher value, and reduce consumption of low-value foods.* When faced with a choice between two foods, that's how you'll decide between them.
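For anyone who wants the 'fairly simple constrained optimisation' spelled out, here's a sketch, assuming utility is additive across foods with taste functions T_i(q_i) and a calorie cost c_i per unit:

\max_{q_i \ge 0} \; \sum_i T_i(q_i) \quad \text{subject to} \quad \sum_i c_i q_i \le C

The Lagrangian is \sum_i T_i(q_i) - \lambda \left( \sum_i c_i q_i - C \right), and the first-order condition for any food actually eaten is T_i'(q_i) = \lambda c_i, i.e. T_i'(q_i) / c_i = \lambda. Marginal taste per calorie is equalised across everything you consume - which is exactly the 'value' metric above. (With constant per-unit taste, T_i(q_i) = t_i q_i, this collapses to the corner solution flagged in the footnote: spend the whole calorie budget on the single best-value item.)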

Let's add further the assumption that the person must have at least one food item and one drink item.

So how do you choose between the items?

Let's start by comparing Coke versus Diet Coke.

A 12 ounce can of Coke has 140 calories. Let's call its tastiness y.

A 12 ounce can of diet coke has, say, 1 calorie (it's closer to zero, but never mind). Let's say you find diet coke much worse than coke - it's only 30% as tasty, say.

So the value of coke is V(coke) = y/140
The value of diet coke is V(diet coke) = 0.3y/1

In other words, Diet Coke is 42 times better value than Coke.

Now let's compare a serve of fries relative to our equivalent of 'Diet Fries', say a Premium Southwest Salad with Chicken.

A large McDonalds French Fries has 500 calories. The salad has 290 calories.

But everyone knows that the salad is not 60% as tasty as the French Fries. At best, it's about a quarter as tasty.

In other words, the Salad is worse value than the large fries.

So the fatty that is rationally optimising the problem we've set out will choose the large fries and the diet coke, and ignore the southwest salad and the coke. This will give him more tasty food for the same amount of calories.

And this conclusion holds no matter what the calorie budget. It doesn't matter if you let the guy eat a huge meal - he's still better off ordering more fries and a diet coke. Coke has a much (calorie)-cheaper substitute than fries do for the same level of taste.
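As a sanity check, here are the taste-per-calorie numbers from the example above in code (the taste figures are the assumptions already made - diet coke 30% as tasty as coke, the salad a quarter as tasty as the fries - and comparisons only matter within a category, drink against drink and food against food):

items = {
    # name: (taste, calories) - taste in arbitrary within-category units
    "coke":      (1.00, 140),
    "diet coke": (0.30, 1),
    "fries":     (1.00, 500),
    "salad":     (0.25, 290),
}

for name, (taste, calories) in items.items():
    print(name, taste / calories)

# diet coke ~0.3 vs coke ~0.007 (the 42x figure above);
# fries 0.002 vs salad ~0.0009 - so fries plus diet coke wins.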

I think it's a mistake to assume that fatties don't care about being fat. My guess is that they care deeply about it. They just really like food.

And these are exactly the people whom I'd expect to figure out the optimal way to eat the most amount of tasty food for a given level of calories.

Frankly, if I only optimised over the things above, I'd eat McDonalds a lot more. It's tasty as hell, and doesn't even have that many calories. As we've seen, you can eat it every day and not necessarily get fat.

The only thing that stops me getting to this point is adding in a health cost to each item. If you care about your health (and on this front, I think it's safe to assume that fatties may not care as much), then you're more likely to pick the salad. But most people are unlikely to pick the salad based on the taste/calorie tradeoff alone. Unless they're idiots. In addition, Kevin Murphy's back-of-the-envelope calculations about how large the health costs of a hamburger are suggest that they're only in the range of $2-$3 per hamburger. What the costs are in terms of attractiveness, however, is another story.

But the bottom line is that it's the fattie at McDonalds who isn't ordering the diet coke who is more likely to be making the mistake. You're always better off ordering the diet coke and getting a larger fries instead.

*Technical aside: if you don't specify a declining function of taste with greater consumption (i.e. each fry tastes as good as the last), the equilibrium will be a corner solution - e.g. you only eat french fries. The fact that people tend to want a burger and fries suggests that taste declines with consumption, and thus the optimisation is at an interior solution. Fact.

Tuesday, June 5, 2012

Burn That Money!

I suspect I joined the ranks of the 'sufficiently wealthy to not sweat the small stuff' when I stopped noticing petrol prices very much. I would be vaguely aware of the total dollar amount when I filled up, but the choice of which petrol station to go to was largely dictated by 'do I need petrol right now?', rather than 'is this place 5 cents cheaper than the other place?'.

This is fine, and I can totally rationalise this to myself - reducing the number of trips to the petrol station is worth the extra dollar or two I might pay.

But I can pinpoint exactly when the 'careless rich' threshold was crossed, and it was yesterday. I was driving home along a route I don't normally take, and the fuel light was on. There are two petrol stations right next to each other on the same side of the road, a Shell and a 76. As I approached, deciding which one to go to, I actually thought 'I like the Shell logo better, let's go there'. It was only when I passed the 76 that I realised I'd had the chance to look at the signs and go to the cheaper option at zero cost, but it had been so long since I'd done that that the thought didn't occur to me in time.

At this point, I was sufficiently embarrassed at myself that I got flustered and drove past both. This made me feel like even more of a lame-o. I went to one further up the road - I have no idea if it was cheaper or more expensive.

Ha. I guess there's a good reason they call it 'Overcoming Bias' and 'Less Wrong', not 'Perfect Rationality'.

Athenios periodically accuses me of being 'part of the 1%', in his hilarious attempts to ignite class war nonsense, and I have no doubt this post will be catnip to him.

Monday, April 2, 2012

Things I've been doing instead of writing blog posts

Reading up on the writings of Mencius Moldbug. I'm about halfway through the 'How Richard Dawkins Got Pwned' essays, where he claims that modern Universalist philosophy (what Dawkins calls 'Einsteinian Religion') can best be described as 'nontheistic Christianity', part of the same progression of ideas descended from the Puritans. Interesting stuff.

The good news is, his writing is excellent!

The bad news is that it's time for dinner and bed, so go read his stuff. I haven't found writing on this theme that's this interesting since Eliezer Yudkowsky finished his daily sequences of posts on rationality, found in the archives of Overcoming Bias.