How We Know What Isn't So

Saved by Martin Poulter on February 4, 2009 at 3:27:00 pm
 

Notes on How We Know What Isn't So: The fallibility of human reason in everyday life by Thomas Gilovich, 1991, Simon & Schuster. ISBN: 0029117062

 


 

I'd highly recommend this as a starting point for anyone interested in learning about cognitive biases. Gilovich is both a prominent researcher and a highly readable writer, and he encourages the reader to think like a bias researcher: using logical principles, we can form hypotheses about which biases to expect; with controlled experiments, we can verify that those biases exist; and by applying them to specific topics, we can see how natural human fallibility can have disastrous results. Cordelia Fine's "A Mind of Its Own" complements it nicely, and is more recent.

 

Outline

Part One: Cognitive Determinants of Questionable Beliefs

 

Ch. 2: Something Out of Nothing: The Misperception and Misinterpretation of Random Data

  • Misperception of random events: the "hot hand" in sports performance has not been found in objective analyses of data, but the belief persists very strongly. One explanation is confirmation bias (ch. 4). Another is the clustering illusion: people do not expect the "runs" that occur in random data, so when they observe those runs they do not interpret them as chance.
  • Kahneman & Tversky's explanation of clustering illusion is overapplication of the representativeness heuristic.
  • A random pattern (e.g. rocket landings in World War II London; stars in the sky) can be divided up in multiple ways. In some of these divisions, the pattern will appear strongly non-random. Even a statistical test might give a strong indication that the pattern is non-random, but this doesn't mean anything because it doesn't take into account the arbitrary partitioning.
  • Once we've come to believe in a phenomenon, confabulation kicks in: we are good at generating satisfying (to ourselves) ad hoc explanations of an outcome.
  • Insensitivity to regression effects: the classic experiment is Kahneman & Tversky's (1973) "On the psychology of prediction". Ss estimating others' exam scores on the basis of scores on a sense-of-humour test totally ignore regression to the mean. Again, over-application of representativeness seems to be the root.
    • Specious learning about reward and punishment (Schaffner 1985)
    • Example of how bias reinforces superstition: unusual rash of disease in Israel was attributed by rabbis to the recently-introduced practice of letting women attend funerals. Once the prohibition on women was introduced, the outbreak diminished, apparently confirming the authority of the rabbis and the existence of God.
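The clustering illusion above can be made concrete with a quick simulation, not taken from the book: even short sequences of fair coin flips usually contain a run of four or more identical outcomes, far more often than intuition suggests. The sequence length (20 flips) and run threshold (4) are arbitrary illustrative choices.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

random.seed(42)
TRIALS, N = 10_000, 20
# Count how often a sequence of N fair flips contains a run of 4 or more.
hits = sum(
    longest_run([random.choice("HT") for _ in range(N)]) >= 4
    for _ in range(TRIALS)
)
print(f"Estimated P(run of 4+ in {N} fair flips): {hits / TRIALS:.2f}")
```

Runs like these feel "too streaky" to be chance, which is exactly the intuition behind belief in the hot hand.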

 

Ch. 3: Too Much from Too Little: The Misinterpretation of Incomplete and Unrepresentative Data

  • Excessive impact of confirmatory information: when forming impressions of the relation between two variables, we focus on one cell rather than using all four cells.
    • Crocker (1982) expt.: subjects use different information strategies to assess whether tennis practice is related to winning versus whether it is related to losing.
    • Particular problem with asymmetric variables, i.e. a presence versus an absence.
  • Tendency to seek confirmatory information: Wason conditional reasoning experiment shows this is not a motivation effect.
    • In ordinary social settings, subjects are good at asking informative (rather than primarily confirmatory) questions (bunch of refs for this), although there are bias effects visible: people ask different questions to find out whether someone is introverted from those they ask to find out whether they are extraverted.
    • Snyder & Cantor (1979): Ss recall a description they were told two days beforehand to assess a person for suitability either for sales or for librarianship. They recall examples that favour suitability for the suggested job.
    • Westerners judged East Germany and West Germany as more similar than Sri Lanka and Nepal. Also as more dissimilar than Sri Lanka and Nepal. This could be explained by a search for features that fit the question, and richer knowledge of the Germanies.
  • Problem of Absent Data:
    • Consider a case where one measure is supposed to be a predictor of another (e.g. school exam scores and university performance) but we only take the second test for people who perform sufficiently well on the first one (e.g. university admission, job interviews, policies in all sorts of areas). Because not all four cells are being used, there isn't a basis for judging the effectiveness of the selection process. The absent information is how well the rejected applicants would have performed.
    • Absent information makes room for bias: it's easy to criticise someone else's decision, whatever it is, because you don't see what would have happened if they had made an alternative choice.
    • Social problem of absent data. If we mistakenly judge someone as unpleasant (e.g. by a stereotype) we don't spend time getting to know them and hence our perception isn't corrected. On the other hand, if we mistakenly perceive someone as good to get on with, we'll find out we're wrong as we get to know them. Hence there is a systematic bias towards perceiving other people as unpleasant.
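The absent-data problem in selection can be sketched numerically. In the made-up population below, an entrance exam genuinely predicts university performance, but if we only ever see the scores of admitted applicants (the cell above the cutoff), the visible correlation is much weaker than the true one. All numbers here are invented purely for illustration.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation, computed directly from the definition."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)

# Hypothetical population: university performance really does depend on exam score.
exam = [random.gauss(0, 1) for _ in range(5000)]
uni = [0.6 * e + random.gauss(0, 0.8) for e in exam]

# The "visible" data: only applicants above the admission cutoff get measured.
admitted = [(e, u) for e, u in zip(exam, uni) if e > 1.0]
adm_exam, adm_uni = zip(*admitted)

print(f"correlation, whole population: {pearson(exam, uni):.2f}")
print(f"correlation, admitted only:    {pearson(adm_exam, adm_uni):.2f}")
```

The attenuation among the admitted is the statistical face of Gilovich's point: without the rejected applicants' outcomes, the data we do see systematically understate how well (or badly) the selection criterion works.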

 

Ch. 4: Seeing What We Expect to See: The Biased Evaluation of Ambiguous and Inconsistent Data

 

Part Two: Motivational and Social Determinants of Questionable Beliefs

 

Ch. 5: Seeing What We Want to See: Motivational Determinants of Belief

 

Ch. 6: Believing What We Are Told: The Biasing Effects of Secondhand Information

 

Ch. 7: The Imagined Agreement of Others: Exaggerated Impressions of Social Support

  • False consensus effect: subjects exaggerating the extent to which others share their beliefs, preferences or attitudes (classic experiment is Ross, Greene & House (1977) in J. Exp. Soc. Psych.)
  • False consensus has a variety of motivational and cognitive mediators
    • Motivational example: false consensus effect is stronger when there is an emotional investment in the belief (Crano 1983, "Assumed consensus of attitudes: the effect of vested interest").
    • People choose sources of information (and other people) that reinforce their existing beliefs.
    • People interpret questions in different ways (e.g. draw category boundaries differently) but do not correct for the fact that other people's interpretations are different.
      • Differences of opinion are not always differences of "judgement of the object" but often differences in the "object of judgement".
    • Actor-observer differences in attribution affect false consensus. When we think our behaviour is due to external circumstances, there is greater false consensus because we expect others to be affected the same way by those circumstances.
    • One important process maintaining false consensus is the lack of negative feedback on behaviour due to adult etiquette.
      • Children merrily taunt each other for breaking social norms. Adults learn not to do this: to minimise apparent disagreement, they avoid pointing out, for example, that a man's fly is undone. Harmony is valued more than expressing honest opinions, though this clearly differs between individuals. In organisational contexts, this tendency can manifest as groupthink. It's etiquette not to discuss politics or religion with someone who isn't well known. Hence people can drift away from social norms without realising, because they don't get negative feedback from their peers.

 

Part Three: Examples of Questionable and Erroneous Beliefs

 

Ch. 8: Belief in Ineffective "Alternative" Health Practices

 

Ch. 9: Belief in the Effectiveness of Questionable Interpersonal Strategies

 

Ch. 10: Belief in ESP

 

Part Four: Where Do We Go from Here?

 

Ch. 11: Challenging Dubious Beliefs: The Role of Social Science

 
