How We Know What Isn't So

Saved by Martin Poulter on January 20, 2009 at 4:07:45 pm
 

Notes on How We Know What Isn't So: The fallibility of human reason in everyday life by Thomas Gilovich, 1991, Simon & Schuster. ISBN: 0029117062 (Google Books info)

 


 

I'd highly recommend this as a starting point for anyone interested in learning about cognitive biases. Gilovich is both a prominent researcher and a highly readable writer, and he encourages the reader to think like a bias researcher: logical principles suggest hypotheses about which biases to expect, controlled experiments verify that those biases exist, and applying them to specific topics shows how natural human fallibility can have disastrous results. Cordelia Fine's "A Mind of Its Own" complements it nicely, and is more recent.

 

Outline

Part One: Cognitive Determinants of Questionable Beliefs

 

Ch. 2: Something Out of Nothing: The Misperception and Misinterpretation of Random Data

  • Misperception of random events: the "hot hand" in sports performance has not been found in objective analyses of the data, but the belief persists very strongly. One explanation is confirmation bias (ch. 4). Another is the clustering illusion: people do not expect the "runs" that occur in genuinely random data, so when they observe those runs they interpret them as something other than chance.
  • Kahneman & Tversky's explanation of the clustering illusion is over-application of the representativeness heuristic.
  • A random pattern (e.g. rocket landings in World War II London; stars in the sky) can be divided up in multiple ways, and under some of these divisions the pattern will appear strongly non-random. Even a statistical test might give a strong indication that the pattern is non-random, but this means nothing, because the test does not take the arbitrary choice of partition into account.
  • Once we've been convinced of a phenomenon, confabulation kicks in: we are good at generating ad-hoc explanations of an outcome that satisfy ourselves.
  • Insensitivity to regression effects: the classic experiment is Kahneman & Tversky's (1973) "On the psychology of prediction". Subjects estimating others' exam scores on the basis of scores on a sense-of-humour test completely ignore regression to the mean. Again, over-application of representativeness seems to be the root.
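The regression effect in the last bullet is easy to reproduce in a quick simulation (this is my own illustrative sketch, not from the book): if each test score is a stable "ability" plus independent luck, the people who top the first test will, on average, score closer to the mean on the second.

```python
import random
import statistics

def simulate_regression(n=10_000, seed=1):
    """Show regression to the mean with two noisy tests of the same ability."""
    rng = random.Random(seed)
    # Observed score = stable ability plus independent luck on each test.
    ability = [rng.gauss(0, 1) for _ in range(n)]
    test1 = [a + rng.gauss(0, 1) for a in ability]
    test2 = [a + rng.gauss(0, 1) for a in ability]
    # Select the top 10% of scorers on the first test.
    top = sorted(zip(test1, test2), reverse=True)[: n // 10]
    mean_test1 = statistics.mean(t1 for t1, _ in top)
    mean_test2 = statistics.mean(t2 for _, t2 in top)
    return mean_test1, mean_test2

m1, m2 = simulate_regression()
# The same people score above average on test 2 (ability is real),
# but well below their test-1 mean (the luck does not repeat).
print(m1, m2)
```

Nothing about the second test differs from the first; the drop is purely a selection effect, which is exactly what makes it so easy to misread as a real change.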

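The clustering-illusion point above — random sequences contain longer runs than intuition expects — can also be checked directly. A minimal sketch (my own, with made-up function names): in only 20 fair coin flips, a run of four or more identical outcomes appears most of the time.

```python
import random

def longest_run(seq):
    """Length of the longest run of identical symbols in seq."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def fraction_with_long_run(n_flips=20, run_len=4, n_trials=10_000, seed=0):
    """Fraction of random flip sequences containing a run of run_len or more."""
    rng = random.Random(seed)
    hits = sum(
        longest_run([rng.choice("HT") for _ in range(n_flips)]) >= run_len
        for _ in range(n_trials)
    )
    return hits / n_trials

# Roughly three quarters of 20-flip sequences contain a run of 4+,
# yet such streaks are exactly what observers read as a "hot hand".
print(fraction_with_long_run())
```

The exact probability can be computed combinatorially (it is about 0.77 for these parameters), but the simulation makes the intuitive point: streaks are the norm in random data, not the exception.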
 

Ch. 3: Too Much from Too Little: The Misinterpretation of Incomplete and Unrepresentative Data

 

Ch. 4: Seeing What We Expect to See: The Biased Evaluation of Ambiguous and Inconsistent Data

 

Part Two: Motivational and Social Determinants of Questionable Beliefs

 

Ch. 5: Seeing What We Want to See: Motivational Determinants of Belief

 

Ch. 6: Believing What We Are Told: The Biasing Effects of Secondhand Information

 

Ch. 7: The Imagined Agreement of Others: Exaggerated Impressions of Social Support

  • False consensus effect: subjects exaggerating the extent to which others share their beliefs, preferences or attitudes (classic experiment is Ross, Greene & House (1977) in J. Exp. Soc. Psych.)
  • False consensus has a variety of motivational and cognitive mediators
    • Motivational example: false consensus effect is stronger when there is an emotional investment in the belief (Crano 1983, "Assumed consensus of attitudes: the effect of vested interest").
    • People choose sources of information (and other people) that reinforce their existing beliefs.
    • People interpret questions in different ways (e.g. draw category boundaries differently) but do not correct for the fact that other people's interpretations differ.
      • Differences of opinion are not always differences of "judgement of the object" but often differences in the "object of judgement".
    • Actor-observer differences in attribution affect false consensus: when we think our behaviour is due to external circumstances, false consensus is greater, because we expect others to be affected the same way by those circumstances.
    • One important process maintaining false consensus is the lack of negative feedback on behaviour due to adult etiquette.
      • Children merrily taunt each other for breaking social norms. Adults learn not to do this: to minimise apparent disagreement they avoid, for example, pointing out when a man has his fly undone. Harmony is valued more than expressing honest opinions, though this clearly differs between individuals. In organisational contexts, this tendency can manifest as groupthink. It is etiquette not to discuss politics or religion with someone who isn't well known. Hence people can drift away from social norms without realising it, because they don't get negative feedback from their peers.

 

Part Three: Examples of Questionable and Erroneous Beliefs

 

Ch. 8: Belief in Ineffective "Alternative" Health Practices

 

Ch. 9: Belief in the Effectiveness of Questionable Interpersonal Strategies

 

Ch. 10: Belief in ESP

 

Part Four: Where Do We Go from Here?

 

Ch. 11: Challenging Dubious Beliefs: The Role of Social Science
