Methodology for DIE IQ estimations
In the anime Death Note, L is sure that Light is Kira but cannot prove it at a logical level, as his conclusions and thought processes are too intuitive and complicated to articulate to his peers. In a similar vein, I can be certain that somebody is very intelligent, but in a way that is difficult to prove at a logical or explicit level. I think this was the biggest issue with the criticisms people levied at my estimations: they often countered my mathematical estimates with arbitrary assessments, or they treated my mathematical estimates as they would treat an arbitrary assessment.
This is the issue I have with determining the IQ of somebody like John von Neumann. It is likely, based on his extensive biography, that he was one of the most intelligent people who ever lived. However, it is difficult to articulate this mathematically. Yes, he could divide two 8-digit numbers in his head at the age of 6 - clearly a sign of genius - but it is difficult to estimate exactly how smart that makes him, as there is no reference population to compare him to.
To generate a consistent and mathematical system of inference for determining the most likely IQ of an individual, a robust system of norms should be established. This is because unstated procedures are easy to manipulate to achieve the estimator's desired outcome. To avoid this flaw, I will state the following rules:
In cases where more than one indicator is used, simulations are used to generate estimates. This is done by generating a distribution of IQ scores, then estimating distributions of wealth/educational attainment/income etc. based on those IQ scores, and then averaging the IQ scores of the simulated individuals whose indicators roughly match those of the person I'm estimating. Due to computational constraints, "roughly matching" usually means within ±0.1 standard deviations on each indicator, which causes the estimates to be slight underestimates - by maybe 1 or 2 IQ points on average.
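The matching procedure above can be sketched in Python roughly as follows. The sample size, seed, and the simple bivariate-normal indicator model (indicator = r·IQ + independent noise) are my own illustrative assumptions, not values from the post:

```python
import random
import statistics

def simulate_estimate(indicator_r, observed_z, n=200_000, window=0.1, seed=0):
    """Monte Carlo sketch of the matching procedure described above.

    indicator_r: assumed IQ-indicator correlations
    observed_z:  the target person's z-scores on those indicators
    Simulated individuals whose indicators all fall within +/- `window` SD
    of the observed values are kept, and their IQ z-scores averaged.
    """
    rng = random.Random(seed)
    matched = []
    for _ in range(n):
        iq_z = rng.gauss(0, 1)
        keep = True
        for r, z in zip(indicator_r, observed_z):
            # indicator = r * IQ + sqrt(1 - r^2) * independent noise
            ind = r * iq_z + (1 - r ** 2) ** 0.5 * rng.gauss(0, 1)
            if abs(ind - z) > window:
                keep = False
                break
        if keep:
            matched.append(iq_z)
    return 100 + 15 * statistics.mean(matched)
```

With a single indicator of validity .55 and an observed z of 2.0, this lands near the closed-form regression value of 100 + 0.55 × 2 × 15 ≈ 116.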
The assumed validities of the following estimates are: normal IQ test: .9; reasonable online IQ test (e.g. mensa.no, ICAR16): .7; anything else: .6.
SAT percentiles will be calculated using the closest nationally representative data. If that is not available, the SD will be calculated with a calculator and the mean will be adjusted downwards to account for the lack of national representativeness.
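A minimal sketch of the percentile-to-z conversion with that downward mean adjustment. The adjustment is expressed here as a shift in SD units, and the 0.4 SD selection gap in the example is purely illustrative, not a value from the post:

```python
from statistics import NormalDist

def sat_z(percentile, mean_shift=0.0):
    """Convert an SAT percentile to a population z-score.

    mean_shift is the adjustment (in SDs) applied when the norming sample
    is not nationally representative: if test takers are assumed to average
    mean_shift SD above the general population, a within-sample z-score
    corresponds to a population z-score that much higher.
    """
    return NormalDist().inv_cdf(percentile / 100) + mean_shift
```

For example, `sat_z(97.7)` gives a within-sample z near 2.0, while `sat_z(97.7, 0.4)` adds the hypothetical 0.4 SD selection gap.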
Gaming skill is assumed to correlate at .45 with IQ regardless of the game involved. This is to avoid subjective judgements[1] of a game's g-loading from altering the estimates of individuals too much. The value of .45 was chosen somewhat arbitrarily, but it reflects various effect sizes: the correlation between LoL rank and matrix reasoning (.44), the correlation between g and the average g-loading of the general factor of gaming (.39), and chess Elo/IQ in unranked samples (.32). I chose a slightly higher value than what you would expect because I believe the correlations generated in the literature are biased downwards, as Elo/rank is not always a reliable estimate of an individual's skills. If only Elo/rank is available, the assumed correlation will be lowered to .4.
Gaming skill independent of rank is assessed from a combination of my eye test, community consensus, achievement, and expert opinion. Generally, these assessments won’t change the final score very much, so I will be fairly liberal about them.
Two metrics that correlate with IQ for similar reasons (e.g. wealth and income) will not be used to estimate the same person’s IQ. If multiple metrics belong to the same nexus of traits (e.g. gaming skill), a maximum of 2 metrics will be used.
Self-reported IQ scores will be taken at face value if the score comes from a person who is trustworthy and is not aware that the number will be used for DIE. Trust is assessed subjectively - for instance, Curtis Yarvin and Steve Sailer are assumed to be trustworthy; Andrew Tate and Donald Trump are not. Generally, self-reports of things like height/weight/GPA tend to be pretty accurate, but with a little noise in the expected directions.
Game z-scores are inferred from the number of active players (if it is an active game); otherwise, from the number of accounts multiplied by 0.1.
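A minimal sketch of that rule, reading the 0.1 multiplier as applying to inactive games (my interpretation of the sentence above):

```python
from statistics import NormalDist

def game_z(rank, players, active=True):
    """Z-score implied by a leaderboard rank.

    For inactive games, the effective player pool is assumed to be
    10% of total accounts (the 0.1 multiplier in the rule above).
    """
    pool = players if active else players * 0.1
    # being ranked `rank` out of `pool` puts you at this upper-tail point
    return NormalDist().inv_cdf(1 - rank / pool)
```

For example, rank 1,000 out of 1,000,000 active players implies a z-score around 3.1.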
The following regression coefficients are used: achievement test → 0.84, achievement subtest → 0.78, GPA/class rank → 0.6 (check moderators and look at mixed), educational attainment → 0.55, income and wealth → 0.35, gaming skill → 0.45, gaming rank → 0.4. The coefficients are a little higher than what is found in the literature, as there is some measurement error in both constructs due to false reports and unreliability.
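For a single indicator, applying one of these coefficients is just regression toward the mean: expected IQ = 100 + r × z × 15. A minimal sketch:

```python
def iq_from_indicator(z, r, mean=100.0, sd=15.0):
    """Regress a single indicator z-score toward the mean:
    expected IQ = mean + r * z * sd.
    """
    return mean + r * z * sd
```

For example, an achievement-test z-score of 2.0 with the 0.84 coefficient gives 100 + 0.84 × 2 × 15 = 125.2.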
I’ve debated whether to regress individuals to a higher number than their racial mean, given that the fact that they are being estimated at all suggests they probably will not regress as much as the average joe. I’ve decided against it: it’s too much work, and there is also self-inflation bias (e.g. reporting the better scores, revealing more positive information about yourself), which is essentially impossible to adjust for.
Scores on long/validated IQ tests (e.g. the WAIS) that exceed 150 but fall short of the ceiling are assumed to be 150. If they hit the ceiling, the score is assumed to be 160. This does not apply to the SAT due to its very high ceiling.
[1] My own assessment of the g-loadings of various games:
General factor of gaming - 0.65 (believe it or not, I guessed this value before I saw this study)
Starcraft 2 - 0.5
LoL (jungler), Warcraft 3 - 0.48
Starcraft Brood war - 0.46
LoL (support) - 0.45
LoL (mid), Super Smash Bros. Melee - 0.44
Dota 2, HOTS, Overwatch (tank/support), chess - 0.43
LoL (top) - 0.41
Poker, Hearthstone, Fortnite - 0.37
Overwatch (DPS), LoL (ADC), osu - 0.35
Counter Strike, Call of Duty - 0.33
I think that the g-loading of a game is mostly based on 5 things:
skill floor (lower skill floor - higher g-loading as it discriminates better in lower IQ samples)
skill ceiling (higher skill ceiling - higher g-loading as it discriminates better in high IQ samples)
how many abilities it tests (more abilities - higher g-loading)
how strategically demanding the game is
dependence on multiple choice reaction time (more dependence - higher g-loading)
For instance, SC2 is in a unique position: it is average in terms of skill floor, high in terms of ceiling, tests a large number of abilities, and is highly strategically demanding. Because of this, I hypothesize that it is the most g-loaded game of them all. I think it is entirely possible that measurement error or publication bias means the true g-loading of gaming is reasonably higher or lower than the currently available figures, but made-up statistics are more useful than nonexistent ones.
Some may say the specific role estimates are unfairly biased against DPS and ADC players - for the record, I play both of these roles; I just happen to believe those skills don’t depend on intelligence very much.