An intelligence quotient, or IQ, is a score derived from one of several different standardized tests designed to assess intelligence. The term "IQ," from the German Intelligenz-Quotient, was devised by the German psychologist William Stern in 1912 as a proposed method of scoring children's intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early 20th century.[1] Lewis Terman adopted that form of scoring, expressing a score as a quotient of "mental age" and "chronological age," for his revision of the Binet-Simon test,[1] the first version of the Stanford-Binet Intelligence Scales.
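Stern's original quotient is a simple ratio of mental age to chronological age. A minimal sketch (the function name is ours, purely for illustration):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's 1912 ratio IQ: mental age divided by chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

# A child whose test performance matches that of a typical 10-year-old,
# but who is only 8 years old, scores:
print(ratio_iq(10, 8))  # 125.0
```

As the next paragraph notes, modern tests no longer use this quotient, but the formula explains where the name "intelligence quotient" and the reference point of 100 come from.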
Although the term "IQ" is still in common use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on standard scoring of the subject's rank order on the test item content, with the median score set to 100 and a standard deviation of 15, although not all tests adhere to that assignment of 15 IQ points to each standard deviation. IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status,[2] and, to a substantial degree, parental IQ. While the heritability of IQ has been investigated for nearly a century, controversy remains regarding the significance of heritability estimates,[3][4] and the mechanisms of inheritance are still a matter of some debate.[5] IQ scores are used in many contexts: as predictors of educational achievement or special needs, by social scientists who study the distribution of IQ scores in populations and the relationships between IQ score and other variables, and as predictors of job performance and income. The average IQ scores for many populations have been rising at an average rate of three points per decade since the early 20th century, a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real changes in intellectual abilities, or merely methodological problems with past or present testing.

History

Psychometrics

The Englishman
Francis Galton, influenced by Darwinism, founded eugenics (and later psychometrics) to measure differences between upper
and lower classes.[6] Galton argued that, due to heredity, white aristocrats were intellectually superior to other humans, while nurture played a lesser role in intellectual capacity.[7]

Clinical diagnostics

French psychologist Alfred Binet gave more weight to nurture, arguing that intelligence could be improved.[8] Binet and fellow French psychologist Théodore Simon later developed the Binet-Simon test for measuring intellectual development, or mental age, in order to diagnose French children in need of special assistance classes.[9] Binet-Simon tasks were diverse in order to neutralize individual differences in types of intelligence and provide a general measure.

Intelligence testing

English psychologist Charles Spearman, however, did not believe there were different types of
intelligence.[10] He devised a correlational formula to define a common intellective factor (which he called "general intelligence"), a factor that Binet argued did not
exist.[11]

Intelligence quotient (IQ)

In 1910 the eugenics movement in the USA seized on the Binet-Simon test as a means of lending credibility to diagnoses of mental retardation, after American psychologist Henry H. Goddard published a translation of it that same year.[12] American psychologist Lewis Terman revised the Binet-Simon scale, implementing German
psychologist William Stern's intelligence quotient (I.Q.) ratio of mental age to chronological age and used the resulting Stanford-Binet scale as a measure of general
intelligence.[13]

World War I: large-scale testing

A team of psychologists led by eugenicist Robert Yerkes assisted the US Army in rapidly assessing and assigning huge numbers of personnel.[14] They developed two group-administered intelligence tests for this purpose: an Alpha test for literate personnel and a Beta test for the illiterate. They successfully tested 1,726,000 recruits.[15] However, there was too little time for validity testing[16] and not enough staff to administer the tests correctly.[17] Many officers distrusted the psychologists, accusing them of conducting research for their own purposes[17] using culturally biased tests.[18] Orders were issued that low test scores should not bar men from officer training and that a disability board should consider all discharges.[19] The outcome was that 0.5% of recruits were discharged as mentally inferior, whereas Yerkes would have preferred to discharge the 3% whose results showed a mental age of under 10.[15] In contrast, another team of psychologists, led by Walter Dill Scott, defined intelligence as a diverse complex of capacities, as Binet had. They stressed an individual-differences approach
emphasizing environmental adjustments of mental qualities as functions.[11] Soon they developed a rating scale to classify and place enlisted men according to how they used their intelligence, not their base levels of
intelligence.[18] The army was enthusiastic about Scott's classification of personnel. By the time the war ended, Scott's team and methods had already been incorporated into the
military[17] and had classified 3½ million men and assigned 973,858 to technical units.[15]

Post-war to 1929

The army abolished Yerkes' team after the war but employed two psychologists to continue intelligence-testing research.[17] However, a great deal of positive post-war publicity on army psychological testing helped make psychology a respected field.[20] Subsequently, there was an increase in jobs and funding in psychology.[21] Group intelligence tests were developed for, and became widely used in, primary and secondary schools, universities and
industry.[17] But controversy followed when one of Yerkes' team, Goddard, admitted that they had been guilty of bad logic in finding that 45% of army recruits had a mental age of 12 or
less.[22] In addition, there was no agreement on what the intelligence tests
measured,[23][24][25] and the
publication of racial differences in scores[26] led to much criticism that environmental influences on score had not been
considered.[23][27]

Wechsler's innovation in scoring

The modern IQ scale is a mathematical transformation of a raw score on an IQ test, based on the rank of that score in a normalization sample. This procedure was pioneered by David Wechsler. Modern scores are sometimes referred to as "deviation IQs," while scores from the older age-specific method are referred to as "ratio IQs."[29] IQ scales of either kind are ordinal scales, and thus IQ points are not units of measurement.[30][31][32][33] The two scoring methodologies yield similar results near the population median, but the older ratio IQs yielded far higher scores for high-scoring (gifted) children. Current IQ tests do not yield such high scores, although recent validation studies suggest that the resulting concerns about ceiling effects in identifying gifted children are exaggerated.[34] In any event, the error of estimation is greatest for scores above the population median[35] and is likely to overestimate the true score (in the sense of formal testing theory) of a test-taker who obtains a score above the median.[36] Since the publication of the Wechsler Adult Intelligence Scale (WAIS), almost all intelligence scales have adopted the deviation method of scoring. This scoring method makes the term "intelligence quotient" an inaccurate description, mathematically speaking, but the term "IQ" still enjoys colloquial usage[37] and is used to label the standard scoring of most cognitive ability tests currently in use.

Reliability

IQ scores can differ for the same individual even on the most sophisticated current IQ scales. (IQ score table data and pupil pseudonyms adapted from the description of the KABC-II norming study cited in Kaufman 2009.[38])
Psychometricians generally regard IQ tests as highly reliable (in the technical sense of testing theory) and clinical psychologists generally regard them as having sufficient validity for many clinical purposes. A test-taker's score on any one IQ test is surrounded by an error band that shows, to a specified degree of confidence, what the test-taker's true score (in that same technical sense) is likely to be. Test-takers can have varying scores on differing
occasions of taking IQ tests and can vary in scores on different IQ tests taken at the same
age.[39][40][41]

The General Intelligence Factor (g)

There are many different kinds of IQ tests using a wide variety of methods. Some tests are visual, some are verbal, some use only abstract-reasoning problems, and some concentrate on arithmetic, spatial imagery, reading, vocabulary, memory or general knowledge.[42] Early in the 20th century, the psychologist Charles Spearman made the first formal factor analysis of correlations
between the tests. He found that a single common factor explained the positive correlations among tests, an argument still accepted in principle by many psychometricians. Spearman named it g for "general intelligence factor." In any collection of IQ tests, by definition the test that best measures g is the one that has the highest correlations with all the others. Most such g-loaded tests typically involve some form of abstract reasoning. Therefore Spearman and others have regarded g as the perhaps genetically determined essence of intelligence, a view that is still common but not proven. Other factor analyses of the data, with different results, are possible, and some psychometricians regard g as a statistical artifact. The most widely accepted measure of g is Raven's Progressive Matrices, a test of
visual reasoning.[42]

Trends in IQ

Main article: Flynn effect

Since the early twentieth century, raw scores on IQ tests have increased in most parts of the world.[43][44][45] When a new version of an IQ test is normed, the standard scoring is set so that performance at the population median results in a score of IQ 100. The phenomenon of rising raw-score performance means that if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. This phenomenon was named the Flynn effect in the book The Bell Curve after James R. Flynn, the author who did the most to bring this phenomenon to the attention of psychologists.[46][47] Researchers have been exploring whether the Flynn effect is equally strong for all kinds of IQ test items, whether
the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what the possible causes of the effect might be.[48] Flynn's observation has prompted much new research in psychology, research said to "demolish some long-cherished beliefs, and raise a number of other interesting issues along the way."[44]

Environmental and genetic influences and malleability

Environmental influences

Environmental factors play a role in determining IQ. Proper childhood nutrition appears critical for cognitive development; malnutrition can lower IQ. For example, iodine deficiency causes a fall, on average, of 12 IQ points.[49] Average IQ in third-world countries is expected to increase dramatically if deficiencies of iodine and other micronutrients are eradicated. A recent study found that the FADS2 gene, along with
breastfeeding, adds about seven IQ points to those with the "C" version of the gene. Those with the "G" version of the FADS2 gene see no
advantage.[50][51] Musical training in childhood may also increase
IQ.[52] Recent studies have shown that training in using one's working memory may increase
IQ.[53][54]

Family environment

Some studies in the developed world have shown that inherited personality traits cause non-related children raised in the same family ("adoptive siblings") to be as different as children raised in different families.[55][not in citation given][56] There are some family effects on the IQ of children, accounting for up to a quarter of the variance; by adulthood, however, this correlation approaches zero.[57] For IQ, adoption studies show that, after adolescence, adoptive siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.86), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adoptive siblings.[55][not in citation given] However, these results make no correction for the social and emotional effects frequently associated with adoption.

Possible bias in older studies

Stoolmiller (1999)[58] found that the restriction in the range of family environments that goes with adoption (adopting families tend to be more similar in, for example, socio-economic status than the general population) suggests a possible underestimation of the role of the shared family environment in previous studies. Corrections for range restriction applied to adoption studies indicate that socio-economic status could account for as much as 50% of the variance in IQ.[58] However, the effect of restriction of range on IQ in adoption studies was examined by Matt McGue and colleagues, who wrote that "restriction in range in parent disinhibitory psychopathology and family socio-economic status had no effect on adoptive-sibling correlations [in] IQ".[59]

Maternal (fetal) environment

A meta-analysis by Devlin and colleagues in Nature (1997),[5] covering 212 previous studies, evaluated an alternative model for environmental influence and found that it fits the data better than the 'family-environments' model commonly used. The shared maternal (fetal) environment effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. Bouchard and McGue reviewed the literature in 2003, arguing that Devlin's conclusions about the magnitude of heritability are not substantially different from previous reports and that the conclusions regarding prenatal effects stand in contradiction to many previous reports.[63]
Dickens and Flynn model

Dickens and Flynn[64] postulate that the arguments regarding the disappearance of the shared family
environment should apply equally well to groups separated in time, yet this is contradicted by the Flynn effect: changes have happened too quickly to be explained by heritable genetic adaptation. This paradox can be explained by observing that the measure "heritability" includes both a direct effect of the
genotype on IQ and also indirect effects whereby the genotype changes the environment, in turn affecting IQ. That is, those with a higher IQ tend to seek out stimulating environments that further increase IQ. The direct effect may initially have been very small, but feedback loops can create large differences in IQ. In their model, an environmental stimulus can have a very large effect on IQ, even in adults, but
this effect also decays over time unless the stimulus continues (the model could be adapted to include possible factors, like nutrition in early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they taught children how to replicate outside the program the kinds of cognitively demanding experiences
that produce IQ gains while they are in the program, and motivated them to persist in that replication long after they have left the program.[64][65]

Genetic influences

Various studies find the heritability of IQ between 0.4 and 0.8 in the United States;[66][67][68] that is, depending on the study, a little less than half to substantially more than half of the variation in IQ among the children studied was estimated to be due to genetic variation. Heritability measures the proportion of variation that can be attributed to genes within any measured population (however defined), not the extent to which genes contribute to intelligence. Moreover, the idea that genetic and environmental influences on intelligence are independent is not universally accepted; as Jeremy Freese puts it, "The lucidity of Lewontin's arguments has historically proven no match for the allure of overly simple characterizations of outcomes as being x% due to genes and (1 – x)% not due to genes". That is, genes affect environment and environment affects genes.[69][70][71][72][73][74] A heritability in the range of 0.4 to 0.8 implies that IQ is "substantially" heritable: some substantial part of the variation within a population is caused at least in part by genes. On the other hand, a 2003 study by Eric Turkheimer, Andreana Haley, Mary Waldron, Brian D'Onofrio, and Irving I. Gottesman demonstrated that the proportions of IQ variance attributable to genes and environment vary with socioeconomic status.
They found that in impoverished families 60% of the variance in IQ (in a sample of 7-year-old twins) was accounted for by the shared environment, while the contribution of genes was close to zero.[75] It is reasonable to expect that genetic influences on traits like IQ should become less important as one accumulates experience with age. However, the opposite occurs.[76] Heritability measures are as low as 20% in infancy, around 40% in middle childhood, and as high as 80% in adulthood.[76] The American Psychological Association's 1995 task force on "Intelligence: Knowns and Unknowns" concluded that within the white population the heritability of IQ is "around .75." The Minnesota Study of Twins Reared Apart, a multiyear study of 100 sets of reared-apart twins started in 1979, found that about 70% of the variance in IQ was associated with genetic variation. Some of the correlation of IQs of twins may be a result of the effect of the maternal environment before birth, shedding some light on why the IQ correlation between twins reared apart is so robust.[5] A number of caveats apply when interpreting heritability estimates.
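As a rough illustration of how twin correlations translate into heritability estimates, Falconer's classic formula doubles the difference between the monozygotic and dizygotic twin correlations. Applying it to the correlations quoted earlier in this article (0.86 for MZ twins, 0.6 for DZ twins) gives a value inside the 0.4–0.8 range reported above. This is a simplified sketch, not the method used by any particular study cited here:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate of broad heritability: h^2 = 2 * (r_MZ - r_DZ).

    r_mz: IQ correlation between monozygotic (identical) twins
    r_dz: IQ correlation between dizygotic (fraternal) twins
    """
    return 2.0 * (r_mz - r_dz)

# Correlations quoted in this article: MZ twins 0.86, DZ twins 0.6
print(round(falconer_h2(0.86, 0.6), 2))  # 0.52, within the reported 0.4-0.8 range
```

The formula assumes, among other things, equal shared environments for both twin types, which is exactly the assumption the Stoolmiller and Devlin critiques above call into question.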
IQ and the brain

In 2004, Richard Haier, professor of psychology in the Department of Pediatrics at the University of California, Irvine, and colleagues at the University of New Mexico used MRI to obtain structural images of the brain in 47 normal adults who also took standard IQ tests. The study demonstrated that general human intelligence appears to be based on the volume and location of gray matter tissue in the brain, and also that only about 6 percent of the brain's gray matter appeared to be related to IQ.[78][non-primary source needed] Many different sources of information have converged on the view that the frontal lobes are critical for fluid intelligence. Patients with damage to the frontal lobe are impaired on fluid intelligence tests (Duncan et al. 1995). The volumes of frontal grey matter (Thompson et al. 2001)[verification needed] and white matter (Schoenemann et al. 2005)[verification needed] have also been associated with general intelligence. In addition, recent neuroimaging studies have narrowed this association to the lateral prefrontal cortex. Duncan and colleagues (2000) showed, using positron emission tomography, that problem-solving tasks that correlated more highly with IQ also activate the lateral prefrontal cortex.[non-primary source needed] More recently, Gray and colleagues (2003) used functional magnetic resonance imaging (fMRI) to show that individuals who were more adept at resisting distraction on a demanding working memory task had both a higher IQ and increased prefrontal activity. For an extensive review of this topic, see Gray and Thompson (2004).[79] A study involving 307 children (aged from six to nineteen), measuring the size of brain structures using magnetic resonance imaging (MRI) and measuring verbal and non-verbal abilities, has been conducted (Shaw et al.
2006).[80][non-primary source needed] This study indicated that higher intelligence is not directly related to cortical thickness, but rather to the change in cortical thickness over time. More intelligent children developed a thicker cortex initially, and then their cortex went through a more rigorous thinning process.[81] Garlick (2010) has argued that this reduction in cortical thickness reflects a pruning process of the neural connections, and that this pruning results in a better ability to identify abstractions in the environment.[82][verification needed] There is "a highly significant association" between the CHRM2 gene and intelligence, according to a 2006 Dutch family study. The study concluded that there was an association between the CHRM2 gene on chromosome 7 and Performance IQ, as measured by the Wechsler Adult Intelligence Scale-Revised. The Dutch family study used a sample of 667 individuals from 304 families.[83][non-primary source needed] A similar association was found independently in the Minnesota Twin and Family Study (Comings et al. 2003) and by the Department of Psychiatry at Washington University.[84][non-primary source needed] Significant injuries isolated to one side of the brain, especially those occurring at a young age, may not significantly affect IQ.[85] Studies reach conflicting conclusions regarding the controversial idea
that brain size correlates positively with IQ. Jensen and Reed claim that no direct correlation exists in nonpathological subjects.[86] A more recent meta-analysis suggests otherwise.[87]

Malleability (mutability or changeability)

Although IQ is believed by many to be immutable,[88] recent research suggests that certain mental activities can change the brain's ability to process information, leading to the conclusion that intelligence can be altered or changed over time. The brain is now understood to be neuroplastic and hence far more amenable to change than was once thought. Studies of the neuroscience of animals indicate that challenging activities can produce changes in gene expression patterns of the brain (Training Degus to Use Rakes,[89] and Iriki's earlier research with macaque monkeys indicating brain changes). A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence gains from specifically designed working memory training.[90] Further research will be needed to determine the nature, extent and duration of the proposed transfer:[91] among other questions, it remains to be seen whether the results extend to kinds of fluid intelligence tests other than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement, or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable over extended periods of time. The peak capacity for both fluid intelligence and crystallized intelligence occurs at age 26.
This is followed by a slow decline.[92]

Validity and social significance

While IQ is sometimes treated as an end unto itself, scholarly work on IQ focuses to a large extent on IQ's validity, that is, the degree to which IQ correlates with outcomes such as job performance, social pathologies, or academic achievement. Different IQ tests differ in their validity for various outcomes.[citation needed] Traditionally, correlation between IQ and outcomes is viewed as a means to predict performance; however, readers should distinguish between prediction in the hard sciences and in the social sciences.

Health

People with a higher IQ have generally lower adult morbidity and mortality. Post-traumatic stress disorder[93] and schizophrenia[94][95] are less prevalent in higher IQ bands. People in the midst of a major depressive episode have been shown to have a lower IQ than when without symptoms, and lower cognitive ability than people without depression of equivalent verbal intelligence.[96][97] A study of 11,282 individuals in Scotland who took intelligence tests at ages 7, 9 and 11 in the 1950s and 1960s found an "inverse linear association" between childhood IQ scores and hospital admissions for injuries in adulthood. The association between childhood IQ and the risk of later injury remained even after accounting for factors such as the child's socioeconomic background.[98] Research in Scotland has also shown that people with a 15-point lower IQ had a fifth less chance of living to 76, while those with a 30-point disadvantage were 37% less likely than those with a higher IQ to live that long.[99] A decrease in IQ has also been shown to be an early predictor of late-onset Alzheimer's disease and other forms of dementia.
In a 2004 study, Cervilla and colleagues showed that tests of cognitive ability provide useful predictive information up to a decade before the onset of dementia.[100] However, when diagnosing individuals with a higher level of cognitive ability (in this study, those with IQs of 120 or more),[101] patients should not be assessed against the standard norm but against an adjusted high-IQ norm that measures changes against the individual's higher ability level. In 2000, Whalley and colleagues published a paper in the journal Neurology which examined links between childhood mental ability and late-onset dementia. The study showed that mental ability scores were significantly lower in children who eventually developed late-onset dementia than in other children tested.[102] Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood, when the brain is growing and the blood-brain barrier is less effective. Such impairment may sometimes be permanent, or may sometimes be partially or wholly compensated for by later growth. Several harmful factors may also combine, possibly causing greater impairment. Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Comprehensive policy recommendations targeting reduction of cognitive impairment in children have been proposed.[103] As for the effect of intelligence on health, in one British study high childhood IQ was shown to
correlate with one's chance of becoming a vegetarian in adulthood.[104] In another British study, high childhood IQ was shown to correlate inversely with the chances of smoking.[105]

Other tests

One study found a correlation of .82 between g (general intelligence factor) and SAT scores;[106] another found a correlation of .81 between g and GCSE scores.[107] Correlations between IQ scores (general cognitive ability) and achievement test scores are reported to be .81 by Deary and colleagues, with the percentage of variance
accounted for by general cognitive ability ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".[107]

School performance

The American Psychological Association's report Intelligence: Knowns and Unknowns[67] states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50, which means that IQ scores explain only 25% of the variance in grades. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81).

Job performance

According to Frank Schmidt and John Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[108] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[109] While IQ is more strongly correlated with reasoning and less so with motor function,[110] IQ-test scores predict performance ratings in all occupations.[108] That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally skilled activities athletic strength (manual strength, speed, stamina, and coordination) is more likely to influence performance.[108] IQ's prediction of job performance is largely mediated through the quicker acquisition of job-relevant knowledge.
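The repeated move in this article from a correlation r to a "percentage of variance explained" is simply r squared. A minimal sketch, using correlations quoted above:

```python
def variance_explained(r: float) -> float:
    """Share of variance in one variable accounted for by another, given correlation r."""
    return r ** 2

# Correlations quoted in this article
for label, r in [("IQ and school grades", 0.50),
                 ("g and SAT scores", 0.82),
                 ("job-performance validity, low end", 0.2)]:
    print(f"{label}: r = {r}, variance explained = {variance_explained(r):.0%}")
```

This is why a correlation of .50 corresponds to "only 25% of the variance," and why correlations below 0.20 account for less than 4%.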
In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[111] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability but not specific ability scores predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[112] The American Psychological Association's report Intelligence: Knowns and Unknowns[67] states that other individual characteristics such as
interpersonal skills, aspects of personality, etc., are probably of equal or greater importance, but at this point we do not have equally reliable instruments to measure them.[67] More recently, others have argued that since most professional tasks are now standardized or automated, and ranked IQ is a stable measurement over time with high
correlation with many positive personal traits from the general population, it is the best tool to help determine hiring and job placement at any stage in a career, independently of experience, personality bias or any formal training one may
acquire.[citation needed]

Income

Some researchers claim that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much."[113][114] Other studies show that ability and job performance are linearly related, such that at all IQ levels an increase in IQ translates into a concomitant increase in performance.[115] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independently of family background.[116] Taking these two findings together, very high IQ produces very high job performance but no greater income than moderately high IQ. Studies also show that high IQ is related to higher net worth.[117] The American Psychological Association's report Intelligence: Knowns and Unknowns[67] states that IQ scores account for about one-fourth of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[67] Some studies claim that IQ accounts for only a sixth of the variation in income because many studies are based on young adults, many of whom have not yet completed their education. On page 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth, or 16%, of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance). A 2002 study[118] further examined the impact of non-IQ factors on income and concluded that an individual's location, inherited wealth, race, and schooling are more important as factors in determining income than IQ.
Other correlations with IQ

In addition, IQ and its correlations with health, violent crime, gross state product, and government effectiveness are the subject of a 2006 paper in the journal Intelligence. The paper breaks down IQ averages by U.S. state, using the federal government's National Assessment of Educational Progress math and reading test scores as a source.[119] There is a correlation of -0.19 between IQ scores and the number of juvenile offences in a large Danish sample; with social class controlled, the correlation dropped to -0.17. Similarly, the correlations for most "negative outcome" variables are typically smaller than 0.20, which means that test scores are associated with less than 4% of their total variance. It is important to realize that the causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated and consequently may be more likely to engage in delinquent behavior than children who do well.[67] IQ is also negatively correlated with certain diseases.

Tambs et al.[120] found that occupational status, educational attainment, and IQ are individually heritable, and further found that "genetic variance influencing educational attainment ... contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ." In a sample of U.S. siblings, Rowe et al.[121] reported that inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role. In 2008, intelligence researcher Helmuth Nyborg examined whether IQ relates to denomination and income, using representative data from the National Longitudinal Study of Youth. His results demonstrated that on average, Atheists scored 1.95 IQ points
higher than Agnostics, 3.82 points higher than Liberal persuasions, and 5.89 points higher than Dogmatic persuasions.[122]

Group differences

Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups and between the sexes. While there is little scholarly debate about the existence of some of these differences, their causes remain highly controversial, both within academia and in the public sphere.

Sex

Men and women show statistically significant differences in average scores on tests of particular abilities.[123][124] Studies also consistently show greater variance in the performance of men than in that of women.[125] IQ tests are weighted on these sex differences so that there is no bias on average in favor of one sex; however, the consistent difference in variance is not removed. Because the tests are defined so that there is no average difference, it is difficult to attach any meaning to a statement that one sex has higher intelligence than the other. However,
some people have made such claims even using unbiased IQ tests. For instance, there are claims that men outperform women on average by three to four IQ points, based on tests of medical students, where the greater variance of men's IQs can be expected to contribute to the result,[126] or where a 'correction' is made for different maturation
ages.[127]

Race

The 1996 Task Force investigation on Intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across
races.[67] The problem of determining the causes underlying this variation relates to the question of the relative contributions of "nature and nurture" to IQ. Psychologists such as
Alan S. Kaufman[128] and Nathan Brody[129] and statisticians such as Bernie
Devlin[130] point out that there are insufficient data to conclude that this is because of genetic influences. One of the most notable authors arguing for a strong genetic influence on this score difference is Arthur Jensen. In contrast,
Richard Nisbett, long-time director of the Culture and Cognition program at the University of Michigan, argues that environmental influences are not sufficiently taken into account in Jensen's writings.[131]

Age

For decades, practitioners' handbooks and textbooks on IQ testing have reported that IQ declines with age after the beginning of adulthood. However, later researchers pointed out that this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect. A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences between age groups of adults. The current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be
controlled for to gain accurate data. It is unclear whether any lifestyle intervention can preserve fluid intelligence into older age.[132]

Public policy

In the United States, certain public policies and laws regarding military service,[133][134] education, public benefits,[135] capital punishment,[136] and employment incorporate an individual's IQ into their decisions. However, in the 1971 case Griggs v. Duke Power Co., in order to minimize employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment except in very rare cases.[137] Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence.

Criticism and views

Binet

Alfred Binet, a French psychologist, did not believe that IQ test scales qualified to measure intelligence. He neither invented the term "intelligence quotient" nor supported its numerical expression.[citation needed] He stated:
Binet had designed the Binet-Simon intelligence scale in order to identify students who needed special help in coping with the school curriculum. He argued that with proper remedial education programs, most students regardless of background could catch up and perform quite well in school. He did not believe that intelligence was a measurable fixed entity. Binet cautioned:
The Mismeasure of Man

Some scientists dispute psychometrics entirely. In The Mismeasure of Man, Harvard professor and paleontologist Stephen Jay Gould argued that intelligence tests were based on faulty assumptions, and he traced their history of use as a basis for scientific racism.
He spends much of the book criticizing the concept of IQ, including a historical discussion of how IQ tests were created and a technical discussion of why g is simply a mathematical artifact. Later editions of the book included criticism of The Bell Curve. Bernard Davis wrote that while nonscientific reviews of The Mismeasure of Man were almost uniformly laudatory, the reviews in scientific journals were almost all highly
critical.[139]

Peter H. Schönemann

Psychologist Peter Schönemann has been a persistent critic of IQ, calling it "the IQ
myth".[140] He argued that Spearman's g is a flawed theory and the high heritability estimates of IQ are based on false assumptions.[141] [editEric Turkheimer ] Relation between IQ and intelligenceAccording to Dr. C. George Boeree of Shippensburg University, intelligence is a person's capacity to (1) acquire knowledge (i.e. learn and understand), (2) apply knowledge (solve problems), and (3) engage in abstract reasoning. It is the power of one's intellect, and as such is clearly a very important aspect of one's overall well-being. Psychologists have attempted to measure it for well over a century.[citation needed] IQ is associated with intelligence, but may fail to act as an accurate measure of "intelligence" in its broadest sense. This is partly because IQ tests only examine particular areas embodied by the broadest notion of "intelligence", failing to account for certain areas which are also associated with "intelligence" such as creativity or emotional intelligence.[citation needed] Several other ways of measuring intelligence have been proposed. Daniel Schacter,
Daniel Gilbert, and others have moved beyond general intelligence and IQ as the sole means of describing intelligence.[142]

Test bias

The American Psychological Association's report Intelligence: Knowns and Unknowns[67] states that IQ tests as predictors of social achievement are not biased against people of African descent, since they predict future performance, such as school achievement, similarly to the way they predict it for people of European descent.[67] However, IQ tests may well be biased when used in other situations. A 2005 study stated that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American
students,"[143] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South
Africa.[144][145] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for children with autism;
the alternative developmental or adaptive skills measures are relatively poor measures of intelligence in autistic children and may have resulted in incorrect claims that a majority of children with autism are mentally retarded.[146]

Outdated methodology

A 2006 review article says that
contemporary mainstream test analysis does not reflect substantial recent developments in the field and "bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s."[147]

The view of the American Psychological Association

In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force in 1995 to write a report on the state of intelligence research that could be used by all sides as a basis for discussion. The full text of the report is available through several websites.[67][148] In this paper the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: "research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications".

The task force concluded that IQ scores have high predictive validity for individual differences in school achievement. They confirmed the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They found that individual differences in intelligence are substantially influenced by genetics and that both genes and environment, in complex interplay, are essential to the development of intellectual competence. The report stated that a number of biological factors, including malnutrition, exposure to toxic substances, and various prenatal and perinatal stressors, result in lowered psychometric intelligence under at least some conditions. The task force agreed that large differences do exist between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction.
The task force suggested that explanations based on social status and cultural differences are possible, and that environmental factors have raised mean test scores in many populations. Regarding genetic causes, they noted that there is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis. The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly genetic explanations.

High IQ societies

There are social organizations, some international, which limit membership to people who score at or above the 98th percentile on some IQ test or equivalent. Mensa International is perhaps the best known of these; there are other groups with the same 98th-percentile requirement.

Popular culture usage

Many websites and magazines use the term IQ to refer to technical or popular knowledge in a variety of subjects not related to intelligence, including sex,[149] poker,[150] and American football,[151] among a wide variety of other topics. These tests are generally not standardized and do not fit within the normal definition of intelligence. Intelligence tests such as the Wechsler Adult Intelligence Scale, the Wechsler Intelligence Scale for Children, the Stanford-Binet, the Woodcock-Johnson III Tests of Cognitive Abilities, and the Kaufman Assessment Battery for Children-II, to name some of the best constructed, do not merely place a test taker's score within the norm, as presumably do the thousands of alleged "IQ tests" found on the internet; they also test factors (e.g., fluid and crystallized intelligence, working memory, and the like) previously found, using factor analysis, to represent pure measures of intelligence.
This claim may not be made for the hundreds of online tests marketing themselves as IQ tests, a distinction that may unfortunately be lost on the public taking them.

Reference charts

IQ reference charts are tables suggested by test publishers to divide intelligence ranges into various categories.
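Because modern tests are normed to a median of 100 with (typically) a standard deviation of 15, percentile cutoffs such as the 98th-percentile membership requirement mentioned above can be computed directly from the normal distribution; a sketch using only Python's standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # deviation-IQ scale: mean 100, SD 15

cutoff_98 = iq.inv_cdf(0.98)   # minimum score at the 98th percentile
share_130 = 1 - iq.cdf(130)    # fraction of the population above 130 (+2 SD)

print(f"98th-percentile cutoff: {cutoff_98:.1f}")  # ~130.8
print(f"share above 130: {share_130:.2%}")         # ~2.28%
```

The same calculation explains why tests that assign a different number of IQ points per standard deviation (as noted in the lead) yield different numeric cutoffs for the same percentile.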