How We Know What Isn't So
Notes on How We Know What Isn't So: The fallibility of human reason in everyday life by Thomas Gilovich, 1991, Simon & Schuster. ISBN: 0029117062 (Google Books info)
Up: Books for Non-Specialists
I'd highly recommend this as one of the starting points for anyone interested in learning about cognitive biases. Gilovich is both a prominent researcher and a highly readable writer. He encourages the reader to think like a bias researcher: from logical principles we can derive hypotheses about which biases to expect; with controlled experiments we can verify that those biases exist; and by applying them to specific topics we can see how natural human fallibility can have disastrous results. Cordelia Fine's "A Mind of Its Own" complements it nicely, and is more recent.
Outline
Part One: Cognitive Determinants of Questionable Beliefs
Ch. 2: Something Out of Nothing: The Misperception and Misinterpretation of Random Data
- Misperception of random events: the "hot hand" in sports performance has not been found in objective analyses of the data, but the belief persists very strongly. One explanation is confirmation bias (ch. 4). Another is the clustering illusion: people do not expect the "runs" that occur in random data, so when they observe those runs they do not interpret them as chance (see the simulation sketch after this list).
- Kahneman & Tversky's explanation of clustering illusion is overapplication of the representativeness heuristic.
- A random pattern (e.g. rocket landings in World War II London; stars in the sky) can be divided up in multiple ways. In some of these divisions the pattern will appear strongly non-random. Even a statistical test might give a strong indication that the pattern is non-random, but this doesn't mean anything because it doesn't take into account the arbitrary partitioning.
- Once we've been convinced of a phenomenon, confabulation kicks in: we are good at generating satisfying (to ourselves) ad-hoc explanations of an outcome.
- Insensitivity to regression effects: the classic experiment is Kahneman & Tversky's (1973) "On the psychology of prediction". Ss estimating others' exam scores on the basis of scores on a sense-of-humour test totally ignore regression to the mean. Again, over-application of representativeness seems to be the root (a toy model follows this list).
- Specious learning about reward and punishment (Schaffner 1985)
- Example of how bias reinforces superstition: an unusual rash of disease in Israel was attributed by rabbis to the recently-introduced practice of letting women attend funerals. Once women were again prohibited from attending, the outbreak diminished, apparently confirming the authority of the rabbis and the existence of God.
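To make the clustering illusion concrete, here is a minimal simulation. It is my illustration, not from the book, and the sequence length (20 flips) and streak threshold (4) are arbitrary choices: most short runs of fair coin flips contain a streak that intuition reads as non-random.

```python
# Minimal sketch (my illustration, not from the book): streaks arise naturally
# in random data. Sequence length and streak threshold are arbitrary choices.
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive items."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
trials = 100_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(20)]) >= 4
    for _ in range(trials)
)
print(f"P(run of 4+ in 20 fair flips) ~ {hits / trials:.2f}")  # about 0.77
```

So a "hot" streak of four is the norm in a random sequence of that length, not evidence against randomness.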
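The regression effect in the score-prediction bullet can be sketched the same way. This is a toy model with assumed parameters (standard-normal ability, equal noise on both tests), not the original study: the top scorers on the first test come out markedly less extreme on the second, because they were selected partly for lucky noise.

```python
# Toy model (assumed parameters, not the original study): regression to the mean.
# Both test scores are true ability plus independent noise.
import random
from statistics import mean

random.seed(0)
ability = [random.gauss(0, 1) for _ in range(100_000)]   # unobserved true ability
test1 = [a + random.gauss(0, 1) for a in ability]        # noisy measurement 1
test2 = [a + random.gauss(0, 1) for a in ability]        # noisy measurement 2

# The top decile on test 1 is selected partly for high ability, partly for
# lucky noise; only the ability component carries over to test 2.
top = sorted(range(len(test1)), key=test1.__getitem__)[-10_000:]
print(f"top decile, test 1 mean: {mean(test1[i] for i in top):+.2f}")  # about +2.5
print(f"top decile, test 2 mean: {mean(test2[i] for i in top):+.2f}")  # about +1.2
```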
Ch. 3: Too Much from Too Little: The Misinterpretation of Incomplete and Unrepresentative Data
- Excessive impact of confirmatory information: when forming impressions of the relation between two variables, we focus on one cell rather than using all four cells of the contingency table (a toy example follows this list).
- Crocker (1982) expt.: subjects use different information strategies to assess whether tennis practice is related to winning versus whether it is related to losing.
- Particular problem with asymmetric variables, i.e. a presence versus an absence.
- Tendency to seek confirmatory information: Wason's conditional reasoning experiment shows this is not a motivational effect.
- In ordinary social settings, subjects are good at asking informative (rather than primarily confirmatory) questions (a bunch of refs for this), although there are visible bias effects: people ask different questions to find out whether someone is introverted from those they ask to find out whether they are extraverted.
- Snyder & Cantor (1979): Ss recall a description they were told two days beforehand to assess a person for suitability either for sales or for librarianship. They recall examples that favour suitability for the suggested job.
- Westerners judged East Germany and West Germany as more similar to each other than Sri Lanka and Nepal, and also as more dissimilar to each other than Sri Lanka and Nepal. This could be explained by a search for features that fit the question, combined with richer knowledge of the Germanies.
- Problem of Absent Data:
- Consider a case where one measure is supposed to be a predictor of another (e.g. school exam scores and university performance) but we only take the second measurement for people who perform sufficiently well on the first one (e.g. university admission, job interviews, policies in all sorts of areas). Because not all four cells are being used, there isn't a basis for judging the effectiveness of the selection process: the absent information is how well the rejected applicants would have performed (see the sketch after this list).
- Absent information makes room for bias: it's easy to criticise someone else's decision, whatever it is, because you don't see what would have happened if they had made an alternative choice.
- Social problem of absent data. If we mistakenly judge someone as unpleasant (e.g. by a stereotype) we don't spend time getting to know them and hence our perception isn't corrected. On the other hand, if we mistakenly perceive someone as good to get on with, we'll find out we're wrong as we get to know them. Hence there is a systematic bias towards perceiving other people as unpleasant.
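A toy example (invented numbers, not Crocker's data) of the one-cell fixation: the practised-and-won cell looks impressive on its own, but comparing win rates across both rows of the 2x2 table shows no relation at all.

```python
# Invented numbers (not Crocker's data): does tennis practice relate to winning?
# Fixating on the practised-and-won cell (30 matches) suggests it does; using
# all four cells shows the win rate is identical either way.
practised = {"won": 30, "lost": 10}
rested = {"won": 15, "lost": 5}

rate_practised = practised["won"] / (practised["won"] + practised["lost"])
rate_rested = rested["won"] / (rested["won"] + rested["lost"])

print(f"won after practising: {rate_practised:.0%}")  # 75% - looks impressive alone
print(f"won after resting:    {rate_rested:.0%}")     # also 75% - no contingency
```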
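The absent-data problem in selection can also be simulated. This sketch uses assumed numbers (exam validity 0.6, admission cutoff one standard deviation above the mean) and is not from the book. In real life the performance of rejected applicants is never observed; only a simulation, which generates the counterfactual, can show how much the admitted-only view understates the exam's validity.

```python
# Sketch with assumed numbers (not from the book): selecting on an exam score
# and then judging the exam only from admitted applicants' performance.
import random

random.seed(0)
exam = [random.gauss(0, 1) for _ in range(100_000)]    # selection test score
perf = [0.6 * e + random.gauss(0, 0.8) for e in exam]  # later performance

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# In real life, perf is unobservable for rejected applicants; only the
# simulation can compare the full picture with the admitted-only view.
admitted = [(e, p) for e, p in zip(exam, perf) if e >= 1.0]
print(f"all applicants: r = {corr(list(zip(exam, perf))):.2f}")  # about 0.60
print(f"admitted only:  r = {corr(admitted):.2f}")               # about 0.32
```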
Ch. 4: Seeing What We Expect to See: The Biased Evaluation of Ambiguous and Inconsistent Data
- The advantages and disadvantages of heuristics
- The behaviour of sportsmen wearing mostly black uniforms is perceived as more aggressive than the same play performed in white uniforms. Teams in black uniforms are penalised significantly more.
- Confirmation bias in evaluating arguments for the death penalty (Lord, Ross and Lepper (1979)) Subjects did not mistake the direction of evidence, but did find ways to minimise the weight of contrary evidence.
- Study of introspections in gambling: gamblers ruminate more about their losses, and remember them better, than their wins. Availability predicts the opposite (dissonance versus heuristics!). The reason seems to be that they use cognitive effort to find face-saving explanations for the losses ("flukes"), while the wins need no such explanation.
- Biased evaluation of scientific findings: confirmation bias in peer review (Mahoney (1977))
- Crucial to science is specifying "success" and "failure" in advance: something rarely done in everyday life. E.g. the predictions of "psychics"; significant events happening "in threes" (but within an undefined time period).
- Multiple endpoints and complex phenomena produce a situation where a match is almost inevitable, e.g. the Barnum effect (a back-of-envelope calculation follows this list).
- How do we reconcile selectively forgetting negative information with the gambling study, in which negative outcomes are processed more intensely? The difference is between one-sided outcomes (e.g. strange happenings during a full Moon) and two-sided outcomes (e.g. a win versus a loss). Role of rumination about "what might have been".
- Confirmations are usually one-sided events: the non-occurrence of a confirming event isn't an event in itself.
- Temporally focused events are usually two-sided, e.g. the result of a sports match. Ss reading a fake dream diary looking for predictive dreams find a greater ratio of "confirmations" to "contradictions" when the predicted events occur on the following day than when they are spread over a broader interval.
- Hedonic asymmetry can be very one-sided: losing your keys stays in the mind more than having them to hand. Times when a couple are "out of sync" can stay in the mind more than when they are harmonious, giving a biased negative view of the relationship. It's possible to see five buses going the other way before your bus arrives, but not possible to make the converse observation.
- Certain events are one-sided by definition: "people only profit from advice when they are ready for it."
- The one-sided versus two-sided distinction seems to be mostly about base rates.
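The multiple-endpoints bullet is just arithmetic. A back-of-envelope calculation with assumed numbers (a 5% chance of a vague prediction matching any one of 40 life events, none from the book): the chance of at least one match is 1 - (1 - p)^k.

```python
# Back-of-envelope calculation (assumed numbers, not from the book): with many
# endpoints, at least one chance match is nearly inevitable.
p, k = 0.05, 40  # per-endpoint match probability; number of endpoints checked
print(f"P(at least one match) = {1 - (1 - p) ** k:.2f}")  # about 0.87
```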
Part Two: Motivational and Social Determinants of Questionable Beliefs
Ch. 5: Seeing What We Want to See: Motivational Determinants of Belief
Ch. 6: Believing What We Are Told: The Biasing Effects of Secondhand Information
Ch. 7: The Imagined Agreement of Others: Exaggerated Impressions of Social Support
- False consensus effect: subjects exaggerating the extent to which others share their beliefs, preferences or attitudes (classic experiment is Ross, Greene & House (1977) in J. Exp. Soc. Psych.)
- False consensus has a variety of motivational and cognitive mediators
- Motivational example: false consensus effect is stronger when there is an emotional investment in the belief (Crano 1983, "Assumed consensus of attitudes: the effect of vested interest").
- People choose sources of information (and other people) that reinforce their existing beliefs.
- People interpret questions in different ways (e.g. draw category boundaries differently) but do not correct for the fact that other people's interpretations are different.
- Differences of opinion are not always differences of "judgement of the object" but often differences in the "object of judgement".
- Actor-observer differences in attribution affect false consensus. When we think our behaviour is due to external circumstances, there is greater false consensus because we expect others to be affected the same way by those circumstances.
- One important process maintaining false consensus is the lack of negative feedback on behaviour due to adult etiquette.
- Children merrily taunt each other for breaking social norms. Adults learn not to: to minimise apparent disagreement, they avoid pointing out, for example, that a man's fly is undone. Harmony is valued more than expressing honest opinions, though this clearly differs between individuals. In organisational contexts, this tendency can manifest as groupthink. It's etiquette not to discuss politics or religion with someone who isn't well known. Hence people can drift away from social norms without realising, because they don't get negative feedback from their peers.
Part Three: Examples of Questionable and Erroneous Beliefs
Ch. 8: Belief in Ineffective "Alternative" Health Practices
Ch. 9: Belief in the Effectiveness of Questionable Interpersonal Strategies
Ch. 10: Belief in ESP
Part Four: Where Do We Go from Here?
Ch. 11: Challenging Dubious Beliefs: The Role of Social Science