What IQ stands for
IQ stands for Intelligence Quotient. The term was coined by the German psychologist William Stern in 1912. Stern proposed a formula: divide a child's "mental age" (as measured by Alfred Binet's tests) by their chronological age and multiply by 100. A 10-year-old who performed like a typical 12-year-old had an IQ of 120. A 10-year-old performing like an 8-year-old scored 80.
The formula: IQ = (Mental Age ÷ Chronological Age) × 100
This ratio method had an obvious flaw: it only made sense for children, since mental age levelled off in adulthood while chronological age kept climbing. A 30-year-old with the same cognitive ability as a 25-year-old would have an IQ of 83 — not because they were less intelligent, but because the formula broke down.
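Stern's formula is simple enough to sketch directly. Reproducing the examples above in Python (the function name is ours, purely for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's 1912 ratio IQ: mental age over chronological age, times 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(12, 10))  # 10-year-old performing like a 12-year-old -> 120.0
print(ratio_iq(8, 10))   # performing like an 8-year-old -> 80.0
print(ratio_iq(25, 30))  # the adult case: ~83.3, where the formula breaks down
```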
The ratio IQ was abandoned in the 1960s. All modern IQ scores are deviation IQs: they express how far a person's performance deviates from the average for their age group, measured in standard deviation units. The mean is set to 100 and the standard deviation to 15. A score of 115 means you performed one standard deviation above the average for your age peers — regardless of how old you are.
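A deviation IQ is just a rescaled z-score. A minimal sketch, assuming a hypothetical age band whose raw-score mean and standard deviation are known from norming (the norm values here are invented for illustration):

```python
def deviation_iq(raw_score, norm_mean, norm_sd, mean=100, sd=15):
    """Convert a raw test score to a deviation IQ against age-group norms."""
    z = (raw_score - norm_mean) / norm_sd  # SDs above or below age peers
    return mean + sd * z

# Hypothetical norms: this age band averages 30 raw points with an SD of 4.
print(deviation_iq(34, norm_mean=30, norm_sd=4))  # one SD above average -> 115.0
print(deviation_iq(30, norm_mean=30, norm_sd=4))  # exactly average -> 100.0
```

Because the score is defined relative to age peers, the same raw performance maps to different IQs at different ages, which is exactly what the ratio method failed to handle for adults.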
What IQ tests actually measure
IQ tests measure a specific set of cognitive abilities: abstract reasoning, pattern recognition, verbal comprehension, numerical reasoning, working memory capacity, and processing speed. They do not measure all of intelligence — they measure the subset of cognitive abilities that can be assessed efficiently in a standardised test format.
The abilities measured by IQ tests tend to correlate with each other. Someone who scores well on verbal comprehension tends to also score well on numerical reasoning and abstract pattern recognition. Charles Spearman first documented this correlation in 1904 and proposed a single underlying factor he called g — general intelligence. Whatever g is at the biological level (researchers still debate this), it is the most replicated and best-validated construct in all of cognitive psychology.
Modern cognitive science typically describes intelligence in terms of the Cattell-Horn-Carroll (CHC) model: a hierarchy with g at the top, broad abilities (like fluid intelligence and crystallised intelligence) at the second level, and narrow specific abilities (like spatial visualisation or lexical knowledge) at the base. This model underlies all current major clinical IQ tests including the WAIS-IV and the Stanford-Binet 5.
A brief history of IQ testing
Alfred Binet developed the first practical intelligence test in 1905, commissioned by the French government to identify children who needed additional educational support. Binet was explicit that his test measured current cognitive performance, not innate or fixed ability — and he worried it would be misused to label children as permanently limited.
His fears proved justified. The test was imported to the United States by Henry Goddard and later Lewis Terman, who refashioned it as a measure of innate, hereditary intelligence. Army Alpha and Beta tests were administered to approximately 1.75 million World War I recruits, generating the first mass IQ dataset — which was then used to justify immigration restrictions, sterilisation programmes, and racial hierarchies. This is a dark chapter of the field's history that should not be minimised.
Contemporary psychometrics has largely — though not entirely — moved beyond these abuses. Modern tests are normed on representative population samples, updated on regular cycles, and used primarily for educational and clinical purposes: identifying learning disabilities, assessing cognitive decline, evaluating gifted programme eligibility, and informing treatment planning in clinical settings.
The IQ scale: what scores actually mean
IQ scores follow a roughly normal (bell-shaped) distribution in the population. The mean is 100 and the standard deviation is 15. About 68% of people score between 85 and 115 — within one standard deviation. About 95% score between 70 and 130 — within two. Fewer than 2.5% score above 130 or below 70.
The table below gives the full breakdown of IQ ranges, their classifications, and what proportion of the population falls in each band. The percentile column answers "what percentage of people score at or below this level."
| Score range | Classification | Percentile range | % of population |
|---|---|---|---|
| 145+ | Profoundly Gifted | 99.9th+ | <0.1% |
| 130–144 | Very Superior / Gifted | 98th–99.9th | 2.2% |
| 120–129 | Superior | 91st–98th | 6.7% |
| 110–119 | High Average | 75th–91st | 16.1% |
| 90–109 | Average | 25th–75th | 50.0% |
| 80–89 | Low Average | 9th–25th | 16.1% |
| 70–79 | Borderline | 2nd–9th | 6.7% |
| Below 70 | Extremely Low | Below 2nd | 2.2% |
One important point: a score in the "Average" range (90–109) is entirely normal and not a cause for concern. This range covers half the population. The media's fixation on genius-level scores creates a distorted impression — 100 is not a disappointing result. It is exactly what the average person scores, by definition.
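The percentile figures in the table fall straight out of the cumulative normal distribution with mean 100 and SD 15. A short sketch using only the Python standard library:

```python
import math

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentage of the population scoring at or below this IQ (normal CDF)."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) * 100.0

print(round(iq_percentile(100), 1))  # 50.0 -- the average, by definition
print(round(iq_percentile(115), 1))  # 84.1 -- one SD above the mean
print(round(iq_percentile(130), 1))  # 97.7 -- roughly the top 2.3%
```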
What IQ predicts — and what it doesn't
IQ is the best single predictor of academic achievement, with correlations typically in the range of r = 0.5–0.6. It is also the strongest available predictor of job performance, particularly in cognitively complex roles (meta-analyses by Schmidt & Hunter, 1998, placed the correlation at approximately r = 0.5 for general mental ability and job performance).
IQ correlates with income, health literacy, longevity, and the ability to navigate complex bureaucratic and medical systems. These correlations are real and practically significant. But an r of 0.5 means IQ accounts for only 25% of the variance in outcomes (the square of the correlation), which is genuinely important but leaves 75% of the variance to other factors. Conscientiousness, interpersonal skills, work ethic, circumstance, and opportunity all contribute substantially to real-world achievement.
The relationship between IQ and creative eminence — the kind of exceptional output that defines historically significant figures — is particularly weak above a threshold of approximately IQ 120. Dean Keith Simonton's research on genius consistently finds that beyond that level, personality factors (especially openness to experience, curiosity, and tolerance for ambiguity) predict creative achievement better than additional IQ points.
Fluid and crystallised intelligence
Modern IQ theory distinguishes between two major components of general intelligence, first proposed by Raymond Cattell and John Horn in the 1960s:
Fluid intelligence (Gf) is the capacity for novel problem-solving — reasoning through problems you have never seen before, without relying on accumulated knowledge. Matrix reasoning tasks (spotting patterns in abstract shapes) are the clearest measure of Gf. Fluid intelligence peaks in the mid-20s and declines gradually from there.
Crystallised intelligence (Gc) is the accumulation of knowledge, vocabulary, and learned skills over a lifetime. Verbal comprehension and factual knowledge tests measure Gc. Unlike fluid intelligence, crystallised intelligence continues to grow into the 60s and 70s for most people before declining. This is why older adults often outperform younger adults on knowledge-heavy tests despite lower fluid reasoning scores.
Understanding this distinction matters for interpreting IQ scores at different life stages. A 25-year-old and a 60-year-old may achieve similar FSIQ scores through very different profiles — the younger person leading on fluid reasoning, the older on crystallised knowledge. For more on this, see our article on Fluid vs Crystallized Intelligence.
The Flynn Effect: IQ scores are rising
One of the most important findings in IQ research is the Flynn Effect: average IQ scores have risen substantially across generations in many countries — by approximately 3 points per decade throughout the 20th century, documented by researcher James Flynn in 1987. Some developing nations undergoing rapid economic change have shown even larger gains.
This rise is far too rapid to have a genetic explanation. In some countries, average scores rose by 20–30 points in a single generation — a change that would require an impossible shift in gene frequencies. The Flynn Effect reflects environmental improvements: better nutrition (especially iodine), reduced infectious disease burden, longer and better-quality formal education, greater familiarity with abstract visual-spatial thinking through media and technology, and reduced exposure to neurotoxins like lead.
The implications are significant: IQ is not fixed, environmental factors powerfully shape test performance, and countries that have invested in education and nutrition have seen measurable cognitive gains at the population level. For a full treatment of the Flynn Effect, including gains by region, see our article on Average IQ by Country.
Heritability: what it means and what it doesn't
IQ heritability estimates in Western adults typically range from 0.5 to 0.8 — meaning 50–80% of the variation in IQ scores within those populations can be attributed to genetic differences between individuals. This is one of the most consistently replicated findings in behavioural genetics, supported by twin studies, adoption studies, and molecular genetic analyses.
The Minnesota Study of Twins Reared Apart — one of the most influential studies in this area — followed 137 pairs of identical twins who had been separated and raised in different families. It found a correlation in IQ scores of approximately 0.70, suggesting very substantial genetic influence even when shared family environment was absent.
But heritability is one of the most widely misunderstood concepts in science. Several critical caveats:
Heritability is a population statistic, not a property of individuals. Saying IQ is 70% heritable does not mean 70% of your IQ comes from genes and 30% from environment. It means that in a specific population under specific environmental conditions, 70% of the variation between people's scores was attributable to genetic differences. Change the environment and the heritability estimate changes.
High heritability does not mean unresponsive to environment. Height is approximately 80–90% heritable in well-nourished Western populations, yet Dutch men grew roughly 20 cm taller over the past 150 years purely through improved nutrition and living conditions. The same logic applies to IQ: the Flynn Effect is proof that IQ responds dramatically to environmental change even when heritability within a population is high.
Heritability within groups says nothing about differences between groups. The classic demonstration: take a single genetically varied batch of seeds and plant half in nutrient-poor soil, half in nutrient-rich soil, keeping conditions uniform within each plot. Within each plot, height variation is entirely genetic (heritability = 1.0). But the difference between the plots' average heights is entirely environmental. High heritability within a population cannot be used to conclude that differences between populations are genetic.
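The seed demonstration can be simulated in a few lines. Everything here (sample sizes, the genetic distribution, the 20-unit soil effect) is an arbitrary illustration, not real data:

```python
import random
from statistics import mean

random.seed(42)

# One genetically varied seed batch, split across two plots.
genes = [random.gauss(50, 5) for _ in range(10_000)]
poor_soil = [g for g in genes[:5_000]]       # uniform poor soil: no boost
rich_soil = [g + 20 for g in genes[5_000:]]  # uniform rich soil: +20 units

# Within each plot, all height variation is genetic (heritability = 1.0),
# yet the gap between plot averages is entirely environmental.
print(round(mean(rich_soil) - mean(poor_soil), 1))
```

The within-plot spread is identical in both plots, so knowing the heritability inside a plot tells you nothing about why the plots' averages differ.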
What about multiple intelligences?
Howard Gardner's Theory of Multiple Intelligences (1983) originally proposed seven distinct intelligences (linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal, and intrapersonal), with naturalist intelligence added in the 1990s to make eight. It is enormously popular in education, used to justify differentiated instruction and to give every child a domain of "intelligence."
The psychometric evidence for Gardner's framework is weak. The proposed intelligences tend to correlate positively with each other — precisely what you would expect if a general factor g underlies them, rather than independent modules. Gardner has not produced validated, reliable assessment instruments for his framework. Mainstream cognitive psychologists view it as an interesting heuristic with limited scientific support.
The Cattell-Horn-Carroll (CHC) model provides a far more empirically grounded account of human cognitive abilities. It describes a hierarchy: g at the top, broad abilities (Gf, Gc, Gsm, Gv, Ga, and others) at the second level, and many narrow specific abilities beneath. All major current clinical IQ tests are designed around CHC theory. It acknowledges different cognitive profiles while still recognising that a general factor underlies and connects them.
Can IQ change?
Yes — more than most people assume. IQ is not a fixed biological constant stamped at birth. In childhood and adolescence, when the brain is still developing rapidly and the environmental contributions to IQ are largest, scores can and do change substantially. A child with a score of 95 at age 8 may score 115 at 16 if they receive high-quality education, adequate nutrition, and a stimulating environment.
In adulthood the picture is more stable, but still not fixed. The Flynn Effect demonstrates that average scores shift upward across generations as environments improve. For individuals, the most evidenced ways to maintain and improve cognitive performance in adulthood are sustained formal education or intellectually demanding work, regular aerobic exercise (which increases BDNF and improves executive function), adequate sleep (chronic deprivation measurably reduces performance), and ensuring no nutritional deficiencies are present. None of these produce dramatic transformations overnight — but they represent real, scientifically evidenced mechanisms for keeping the cognitive abilities you have operating at their best. For a full breakdown, see our article on How to Increase Your IQ.
What about online IQ tests?
Most free online IQ tests are entertainment products, not valid psychometric instruments. They use small question sets biased toward easy visual-spatial items, apply non-representative norms, and are calibrated to report inflated scores, because flattering scores are what people share on social media. Any test whose reported general-population average sits above 115 is demonstrably miscalibrated.
The markers of a more honest online test: it uses adaptive question selection to efficiently estimate ability across a wide range, applies IRT (Item Response Theory) scoring rather than simple right-minus-wrong, reports a confidence interval alongside the score (reflecting real measurement uncertainty), and does not claim the result is equivalent to a clinical assessment. For more detail on evaluating online tests, see How Accurate Are Online IQ Tests?
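The core of IRT scoring can be sketched with the two-parameter logistic (2PL) model, which gives the probability that a person of latent ability theta answers a given item correctly. This is a generic textbook sketch, not the scoring model of any particular test:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT item response function.
    theta = latent ability, a = item discrimination, b = item difficulty,
    all on the same standard (z-score-like) scale."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An average examinee (theta = 0) facing an average-difficulty item (b = 0)
# has a 50% chance of answering correctly, whatever the discrimination:
print(p_correct(0.0, a=1.5, b=0.0))  # 0.5
```

An adaptive test picks each next item with difficulty b near the current ability estimate, where the item is most informative, which is what lets a short test estimate ability efficiently across a wide range.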
AurorIQ uses IRT scoring calibrated against a representative norm group. We report confidence intervals and are explicit that our results are estimates rather than clinical assessments. If you need a clinically validated IQ score — for educational placement, disability assessment, or Mensa application — you need a licensed psychologist and a validated clinical instrument like the WAIS-IV or Stanford-Binet 5.
The honest bottom line
IQ is a real, measurable construct that predicts important outcomes. It is not fixed, it is not the only thing that matters, and it does not define you as a person. The score reflects a specific sample of cognitive abilities measured under specific conditions — not your worth, your ceiling, or your destiny.
A high IQ is a genuine advantage in cognitively demanding work and in navigating complex systems. A moderate IQ combined with strong conscientiousness, effective habits, and good judgment will take most people further than a high IQ with poor execution. And a low score — especially from an online test without proper norming — is not a reliable indicator of anything that should change your self-conception or your ambitions.
The number is a tool. Like any tool, its value depends on using it correctly — with an understanding of what it measures, what it doesn't, and where its real limits are.
Frequently asked questions
What is a good IQ score?
An IQ of 100 is exactly average by design. Scores of 90–109 fall in the average range and cover 50% of the population; a score of 100 is entirely normal, not disappointing. Scores of 110–119 are high average (110 and above is the top 25%), 120–129 are superior (120 and above is the top 9%), and 130 and above is very superior or gifted (the top 2.3%). A "good" score depends on context: for most everyday purposes, 100 is completely adequate; in highly cognitively demanding professions like medicine or engineering, higher scores are more common but not required.
What does IQ actually measure?
IQ tests measure abstract reasoning, pattern recognition, verbal comprehension, numerical reasoning, working memory, and processing speed — abilities that correlate with a single underlying factor called g (general intelligence). They do not measure creativity, emotional intelligence, wisdom, practical skills, motivation, character, or most of what determines real-world success. IQ tests are powerful within their defined scope and limited outside it.
What is the average IQ score?
The average IQ score is exactly 100, by design. IQ tests are standardised so that 100 is always the population mean for the norming group. Approximately 68% of people score between 85 and 115, and 95% score between 70 and 130. The average is recalibrated with each test norming update — which is why scores in different eras aren't directly comparable without accounting for the Flynn Effect.
Can IQ change over time?
Yes. IQ is not fixed at birth. In childhood and adolescence, scores can change substantially in response to educational quality, health, nutrition, and developmental progress. The Flynn Effect, with average scores rising roughly 3 points per decade in many countries through the 20th century, is proof that IQ responds powerfully to environmental conditions. In adulthood, scores are more stable but still affected by exercise, sleep quality, ongoing education, and the absence of nutritional deficiencies.
What is the IQ score range?
The IQ scale has a mean of 100 and a standard deviation of 15. Practical scores range from below 70 (extremely low range, bottom 2%) to above 130 (very superior, top 2%). Scores above 145 are sometimes called "profoundly gifted" and occur in fewer than 0.1% of the population. About 95% of people score between 70 and 130. The classifications — average, high average, superior — are conventional labels, not fixed categories, and different test publishers use slightly different terminology.
Is IQ mostly determined by genetics?
IQ heritability in Western adults is approximately 50–80%, meaning that in those populations, 50–80% of the variation in scores between individuals is attributable to genetic differences. But heritability does not mean "fixed at birth." It is a population statistic that changes when environments change. The Flynn Effect — scores rising 3 points per decade through environmental improvement — proves that IQ is highly responsive to environmental conditions even when within-population heritability is high. High heritability within a population also says nothing about differences between populations.